# CLI Reference

After installing `epftoolbox2`, an `epf` command is available on your `PATH`.

```sh
pip install epftoolbox2
epf --help
```

## Interactive Mode

Running `epf` with no arguments launches a wizard-style interactive session where you can build and run pipelines without writing any YAML.

```sh
epf
```

You’ll see a menu with arrow-key navigation:

```text
Run experiment from YAML
Run data pipeline from YAML → CSV
Run model pipeline from YAML
──────────────────────────────────
Build & run data pipeline interactively
Build & run model pipeline interactively
──────────────────────────────────
Exit
```

**Build & run data pipeline interactively** walks you through:

  1. Adding sources (EntsoeSource, OpenMeteoSource, CsvSource, CalendarSource) with prompted config
  2. Adding transformers (optional)
  3. Adding validators (optional)
  4. Setting date range and caching
  5. Choosing an output CSV path
  6. Saving a reusable data_pipeline.yaml
  7. Running immediately
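The wizard saves these choices as a `data_pipeline.yaml` you can re-run later. The real schema is whatever the wizard (or `epf init`) writes; the fragment below is only a hypothetical illustration of the shape implied by the steps above, and every key in it is an assumption:

```yaml
# Hypothetical sketch only: all keys here are assumptions.
# Generate a real file with the interactive wizard or `epf init`.
sources:
  - type: EntsoeSource      # one of the prompted source types
  - type: CalendarSource
transformers: []            # optional (step 2)
validators: []              # optional (step 3)
```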

**Build & run model pipeline interactively** walks you through:

  1. Loading an input CSV (from a previous data run)
  2. Adding models — available columns are shown as a checkbox list for predictor selection
  3. Adding evaluators and exporters
  4. Setting the test period, target column, and horizon
  5. Saving a reusable model_pipeline.yaml
  6. Running immediately

## `epf run` — Full workflow

Run a complete experiment from a Workflow YAML.

```sh
epf run experiment.yaml
```

Options:

| Flag | Description |
| --- | --- |
| `--model-index N` | Run only `models[N]` |
| `--processes N` | Override `max_processes` |
| `--threads N` | Override `threads_per_process` |
| `--dry-run` | Build both pipelines without executing; validates config |

Examples:

```sh
# Basic run
epf run experiment.yaml

# Run only the third model (models[2])
epf run experiment.yaml --model-index 2

# Validate config without running
epf run experiment.yaml --dry-run
```
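Because `--model-index N` restricts a run to `models[N]`, a small script can sweep the indexes and launch one `epf run` per model. A minimal sketch in Python; the helper name and the sweeping idea are illustration, only the documented flag is assumed:

```python
def model_index_cmds(experiment: str, n_models: int) -> list[list[str]]:
    """Build one `epf run` command line per model index (hypothetical helper).

    --model-index N runs only models[N], per the options table above.
    Pass each result to subprocess.run(cmd, check=True).
    """
    return [
        ["epf", "run", experiment, "--model-index", str(i)]
        for i in range(n_models)
    ]
```

With `check=True`, the sweep stops at the first model whose run fails.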

## `epf data` — Data pipeline only

Run a DataPipeline YAML and save the result as a CSV.

```sh
epf data data_pipeline.yaml --start 2022-01-01 --end 2024-01-01 --output data.csv
```

Required flags:

| Flag | Description |
| --- | --- |
| `--start DATE` | Fetch start date (`YYYY-MM-DD`) |
| `--end DATE` | Fetch end date (`YYYY-MM-DD`) |
| `--output FILE` | Output CSV path |

Optional flags:

| Flag | Description |
| --- | --- |
| `--cache PATH` | Cache directory or CSV file for source data |

Example:

```sh
epf data data_pipeline.yaml \
  --start 2020-01-01 \
  --end 2024-01-01 \
  --output data.csv \
  --cache .cache/
```

## `epf model` — Model pipeline only

Run a ModelPipeline YAML on an existing CSV dataset (e.g. produced by epf data).

```sh
epf model model_pipeline.yaml \
  --data data.csv \
  --test-start 2023-01-01 \
  --test-end 2024-01-01
```

Required flags:

| Flag | Description |
| --- | --- |
| `--data FILE` | Input CSV file |
| `--test-start DATE` | Test period start |
| `--test-end DATE` | Test period end |

Optional flags:

| Flag | Description |
| --- | --- |
| `--target COL` | Target column name (default: `price`) |
| `--horizon N` | Forecast horizon in days (default: 7) |
| `--save-dir DIR` | Save per-model prediction JSONL files to this directory |
| `--forecast-only` | Skip evaluators and exporters |
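These flags compose mechanically, so a wrapper script can assemble the command line before handing it to `subprocess.run`. A sketch under that assumption; `build_model_cmd` is a hypothetical helper, and only the flag names documented above come from epftoolbox2:

```python
def build_model_cmd(pipeline, data, test_start, test_end,
                    target=None, horizon=None,
                    save_dir=None, forecast_only=False):
    """Assemble an `epf model` command line (hypothetical helper).

    Only the required flags are always emitted; optional flags are added
    when given. Omitted options fall back to the CLI defaults
    (target "price", horizon 7 days).
    """
    cmd = ["epf", "model", pipeline,
           "--data", data,
           "--test-start", test_start,
           "--test-end", test_end]
    if target is not None:
        cmd += ["--target", target]
    if horizon is not None:
        cmd += ["--horizon", str(horizon)]
    if save_dir is not None:
        cmd += ["--save-dir", save_dir]
    if forecast_only:
        cmd.append("--forecast-only")
    return cmd
```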

## `epf validate` — Validate any pipeline YAML

Load and parse a YAML file, instantiate all components, and report errors — without running anything.

```sh
epf validate data_pipeline.yaml
epf validate model_pipeline.yaml
epf validate experiment.yaml
```

The file type is auto-detected from the YAML's top-level keys. The command exits with code 1 if validation fails.
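That exit-code contract makes `epf validate` easy to wire into CI or a pre-commit hook. A minimal sketch, assuming only the documented exit code; the `validate_all` helper and its `cmd` override are illustration, not part of epftoolbox2:

```python
import subprocess

def validate_all(paths, cmd=("epf", "validate")):
    # Run `epf validate` on each YAML file. A non-zero return code
    # (documented: 1 on validation failure) marks the file as broken.
    # `cmd` is overridable purely so the helper can be exercised
    # without epf installed.
    failures = []
    for path in paths:
        result = subprocess.run([*cmd, str(path)])
        if result.returncode != 0:
            failures.append(str(path))
    return failures
```

For example, `validate_all(sorted(Path(".").glob("*.yaml")))` checks every pipeline file in the current directory and returns the ones that fail.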


## `epf init` — Scaffold template files

Create `experiment.yaml`, `data_pipeline.yaml`, and `model_pipeline.yaml` with placeholder values in the current directory.

```sh
epf init
epf init --force   # overwrite existing files
```