Jason_A
10/01/2025, 6:09 PM

Marvin
10/01/2025, 6:10 PM

Marvin
10/01/2025, 6:17 PM
- Each deployment entry sets its own parameters (env, pfile) and uses a name that matches the pfile’s filename.
- Run prefect deploy to create/update all deployments.
- When you add a new pfile, add one more entry to prefect.yaml and run prefect deploy again.
Example prefect.yaml
Update the placeholders for your script path and flow function name.
name: my-project
prefect-version: ">=3.0.0"

# Define one or more deployments for the same flow
deployments:
  - name: orders-dev
    entrypoint: ./flows/my_flow.py:my_flow  # path/to/script.py:flow_function_name
    parameters:
      env: dev
      pfile: configs/orders.yml
    work_pool:
      name: my-pool
  - name: customers-dev
    entrypoint: ./flows/my_flow.py:my_flow
    parameters:
      env: dev
      pfile: configs/customers.yml
    work_pool:
      name: my-pool
  # If you also want prod variants for the same pfiles:
  - name: orders-prod
    entrypoint: ./flows/my_flow.py:my_flow
    parameters:
      env: prod
      pfile: configs/orders.yml
    work_pool:
      name: my-pool
Create the work pool (once) and deploy
- If you don’t have a work pool yet:
  prefect work-pool create my-pool --type process
- Create or update all deployments from prefect.yaml:
  prefect deploy
- Or deploy a subset by name/pattern (useful when adding one new pfile):
  prefect deploy -n customers-dev
  prefect deploy -n "orders-*"
- Start a worker to pick up runs:
  prefect worker start --pool my-pool
Adding a new pfile later
- Drop the new YAML (e.g., configs/inventory.yml) into your repo.
- Add a new deployment entry to prefect.yaml:
  - name: inventory-dev
    entrypoint: ./flows/my_flow.py:my_flow
    parameters:
      env: dev
      pfile: configs/inventory.yml
    work_pool:
      name: my-pool
- Deploy just that one (or all):
  prefect deploy -n inventory-dev
  # or
  prefect deploy
Tips
- Entry point format is ./path/to/file.py:flow_function_name (no extra Prefect code required in your script).
- The pfile path is resolved at runtime; if you later run in Docker/Kubernetes, ensure those files are in the image or mounted at the same relative path.
- You can keep the YAML DRY with YAML merge keys if you like, but the simple explicit form above is easiest to start with.
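For example, a merge-key version of the earlier file might look like this (a sketch; the `definitions` section name and anchor name `common` are illustrative, and the entrypoint/work pool names are reused from the example above):

```yaml
definitions:
  common: &common
    entrypoint: ./flows/my_flow.py:my_flow
    work_pool:
      name: my-pool

deployments:
  - name: orders-dev
    <<: *common
    parameters:
      env: dev
      pfile: configs/orders.yml
  - name: customers-dev
    <<: *common
    parameters:
      env: dev
      pfile: configs/customers.yml
```

Each deployment then only declares what differs (its name and parameters), which keeps adding a new pfile down to a four-line entry.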
Docs for reference
- Prefect YAML and deployments: How to: prefect.yaml
- Deployments concepts: Deployments concepts
Quick questions to tailor this exactly:
- What’s the path to your script and the flow function name?
- Where do your pfile YAMLs live (e.g., configs/*.yml)?
- Are you running locally (process) or via Docker/Kubernetes? This affects how the pfile path should be made available at runtime.

Jason_A
10/01/2025, 6:57 PM

Jason_A
10/01/2025, 6:58 PM

Marvin
10/01/2025, 6:58 PM

Marvin
10/01/2025, 7:00 PM
If your script reads its inputs from sys.argv, there isn’t a built-in way to wire deployment parameters into sys.argv directly.
You’ve got three viable paths:
A) Recommended: Use flow parameters (no Prefect-specific code required)
- Keep your script’s @flow function taking env and pfile.
- Use prefect.yaml to set those parameters per deployment (as shown earlier).
- If you still want local CLI usage, add a tiny, Prefect-agnostic fallback in your code that uses sys.argv locally:
from prefect import flow
import sys

@flow
def my_flow(env: str | None = None, pfile: str | None = None):
    if env is None or pfile is None:
        # local CLI fallback
        # e.g., python my_flow.py dev configs/orders.yml
        _, env, pfile = sys.argv
    ...
This does not import anything from Prefect and keeps local CLI use working, while deployments use parameters.
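To show just the fallback logic in isolation, here is the same pattern as a plain function with no Prefect dependency (the name resolve_params is illustrative, not from your script, and argv is injectable only to make it easy to exercise):

```python
import sys

def resolve_params(env=None, pfile=None, argv=None):
    """Return (env, pfile); fall back to CLI-style args when either is missing."""
    argv = argv if argv is not None else sys.argv
    if env is None or pfile is None:
        # e.g. argv == ["my_flow.py", "dev", "configs/orders.yml"]
        _, env, pfile = argv
    return env, pfile
```

When Prefect runs the deployment it passes env/pfile as parameters, so the fallback branch is skipped; only bare CLI runs hit sys.argv.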
B) No changes to your existing script: add a tiny wrapper entrypoint that sets sys.argv
- Create a separate small file for the entrypoint (your original script remains unchanged). The wrapper maps deployment parameters to sys.argv, then calls your existing flow.
- Caveat: this creates a parent flow (the wrapper) and your existing flow as a subflow in Prefect.
wrap_entry.py:
from prefect import flow
import sys
from pathlib import Path

# Import your existing flow without changing it
from my_flow import my_flow  # my_flow is your existing @flow

@flow
def wrapper(env: str, pfile: str):
    # Emulate CLI arguments so your script can keep using sys.argv
    sys.argv = [Path(__file__).name, env, pfile]
    # Call your original flow (it can still accept env/pfile directly, or read sys.argv internally)
    return my_flow(env=env, pfile=pfile)
prefect.yaml (entrypoint points to wrapper, not your original file):
deployments:
  - name: orders-dev
    entrypoint: ./wrap_entry.py:wrapper
    parameters:
      env: dev
      pfile: configs/orders.yml
    work_pool:
      name: my-pool
C) Environment variables per deployment (requires reading env vars in your script)
- Set per-deployment env vars via work pool job variables:
deployments:
  - name: orders-dev
    entrypoint: ./flows/my_flow.py:my_flow
    parameters: {}  # optional if you read only env vars
    work_pool:
      name: my-pool
      job_variables:
        env:
          APP_ENV: dev
          APP_PFILE: configs/orders.yml
- Update your script to read os.environ["APP_ENV"] and os.environ["APP_PFILE"]. This is a code change, but no Prefect-specific imports are needed.
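The reading side of option C might look like this (a sketch; APP_ENV and APP_PFILE are the assumed variable names from the job_variables example above, and the defaults are only a convenience for local runs):

```python
import os

def load_settings():
    """Read per-deployment settings from env vars set via job_variables."""
    env = os.environ.get("APP_ENV", "dev")
    pfile = os.environ.get("APP_PFILE", "configs/orders.yml")
    return env, pfile
```

Using .get() with defaults means local `python my_flow.py` runs still work without exporting anything, while each deployment overrides both values through its work pool job variables.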
Why you can’t just pass CLI args
- The worker executes the Prefect runner process (not python your_script.py ...). Prefect loads your flow and injects parameters; it does not propagate deployment parameters into sys.argv.
If you can accept a minimal change, option A keeps a single flow run per execution and is the cleanest. If you must not touch the original script at all, option B works with the small wrapper file at the cost of creating a parent flow + subflow.
References
- Prefect YAML + deployments: How to: prefect.yaml
- Deployments concepts: Deployments concepts
If you share your script’s entrypoint (path:function) and whether nested runs are acceptable, I can give you a drop-in wrapper and prefect.yaml tailored to your repo.