# CLI Reference
After installing epftoolbox2, an epf command is available on your PATH.
```bash
pip install epftoolbox2
epf --help
```

## Interactive Mode
Running epf with no arguments launches a wizard-style interactive terminal where you can build and run pipelines without writing any YAML.
```bash
epf
```

You’ll see a menu with arrow-key navigation:
```text
Run experiment from YAML
Run data pipeline from YAML → CSV
Run model pipeline from YAML
──────────────────────────────────
Build & run data pipeline interactively
Build & run model pipeline interactively
──────────────────────────────────
Exit
```

**Build & run data pipeline interactively** walks you through:
- Adding sources (EntsoeSource, OpenMeteoSource, CsvSource, CalendarSource) with prompted config
- Adding transformers (optional)
- Adding validators (optional)
- Setting date range and caching
- Choosing an output CSV path
- Saving a reusable `data_pipeline.yaml`
- Running immediately
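The builder saves those choices to a YAML file. As a rough sketch only, such a file might look like the following; every key name here is an assumption for illustration, not the documented schema, so check a file the builder actually writes:

```yaml
# Illustrative sketch — key names are assumptions, not the
# confirmed epftoolbox2 schema. Generate a real file with the
# interactive builder or `epf init`.
sources:
  - type: CsvSource          # one of the prompted source types
    path: history.csv
  - type: CalendarSource
transformers: []             # optional, per the steps above
validators: []               # optional
cache: .cache/
```

Running `epf data` against the saved file then reproduces the interactive run non-interactively.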
**Build & run model pipeline interactively** walks you through:
- Loading an input CSV (from a previous data run)
- Adding models — available columns are shown as a checkbox list for predictor selection
- Adding evaluators and exporters
- Setting the test period, target column, and horizon
- Saving a reusable `model_pipeline.yaml`
- Running immediately
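Again as a hedged sketch, a saved model pipeline file might take a shape like this; the key names and the model name are placeholders invented for illustration, not taken from the tool’s schema:

```yaml
# Illustrative sketch — key and model names are assumptions,
# not the confirmed epftoolbox2 schema.
models:
  - type: SomeModel          # placeholder; pick from the builder's model list
    predictors: [load, wind] # columns chosen via the checkbox list
target: price
horizon: 7
evaluators: []               # optional
exporters: []                # optional
```

The saved file is then runnable with `epf model` against any CSV produced by `epf data`.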
## `epf run` — Full workflow
Run a complete experiment from a Workflow YAML.
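The precise Workflow YAML schema is not shown here; a plausible shape, assuming purely illustrative key names (run `epf init` for a real template), would combine a data section and a model section in one file:

```yaml
# Hypothetical experiment.yaml — all keys are illustrative
# assumptions, not the documented epftoolbox2 schema.
data:
  sources:
    - type: CsvSource
      path: history.csv
models:
  - type: SomeModel          # placeholder model name
target: price
test:
  start: 2023-01-01
  end: 2024-01-01
max_processes: 4             # overridable with --processes
```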
```bash
epf run experiment.yaml
```

Options:
| Flag | Description |
|---|---|
| `--model-index N` | Run only `models[N]` |
| `--processes N` | Override `max_processes` |
| `--threads N` | Override `threads_per_process` |
| `--dry-run` | Build both pipelines without executing — validates config |
Examples:
```bash
# Basic run
epf run experiment.yaml

# Run only the third model (models[2])
epf run experiment.yaml --model-index 2

# Validate config without running
epf run experiment.yaml --dry-run
```

## `epf data` — Data pipeline only
Run a DataPipeline YAML and save the result as a CSV.
```bash
epf data data_pipeline.yaml --start 2022-01-01 --end 2024-01-01 --output data.csv
```

Required flags:
| Flag | Description |
|---|---|
| `--start DATE` | Fetch start date (YYYY-MM-DD) |
| `--end DATE` | Fetch end date (YYYY-MM-DD) |
| `--output FILE` | Output CSV path |
Optional flags:
| Flag | Description |
|---|---|
| `--cache PATH` | Cache directory or CSV file for source data |
Example:
```bash
epf data data_pipeline.yaml \
  --start 2020-01-01 \
  --end 2024-01-01 \
  --output data.csv \
  --cache .cache/
```

## `epf model` — Model pipeline only
Run a ModelPipeline YAML on an existing CSV dataset (e.g. one produced by `epf data`).
```bash
epf model model_pipeline.yaml \
  --data data.csv \
  --test-start 2023-01-01 \
  --test-end 2024-01-01
```

Required flags:
| Flag | Description |
|---|---|
| `--data FILE` | Input CSV file |
| `--test-start DATE` | Test period start |
| `--test-end DATE` | Test period end |
Optional flags:
| Flag | Description |
|---|---|
| `--target COL` | Target column name (default: `price`) |
| `--horizon N` | Forecast horizon in days (default: 7) |
| `--save-dir DIR` | Save per-model prediction JSONL files to this directory |
| `--forecast-only` | Skip evaluators and exporters |
## `epf validate` — Validate any pipeline YAML
Load and parse a YAML file, instantiate all components, and report errors — without running anything.
```bash
epf validate data_pipeline.yaml
epf validate model_pipeline.yaml
epf validate experiment.yaml
```

The file type is auto-detected from its top-level keys. Exits with code 1 if validation fails.
## `epf init` — Scaffold template files
Create `experiment.yaml`, `data_pipeline.yaml`, and `model_pipeline.yaml` with placeholder values in the current directory.
```bash
epf init
epf init --force   # overwrite existing files
```