# ask-marvin
j
@Marvin , I have a Python script that contains a single flow. The script requires 2 parameters, env and pfile. The env parameter is either dev or prod. The pfile parameter is a relative path to a yml file containing data required for the run. I am using Prefect and trying to understand how to create deployments for this script where each deployment has a different pfile value. I do not want to add any additional Prefect code to the script, and I want to name each deployment after its pfile filename. How can I do this in a simple way to initially create a deployment for each pfile I have? Also, how would I add a new deployment once we create a new pfile yml file?
m
thought for 428.4 seconds
Great question — you can do this cleanly in Prefect 3.x without touching your flow script by defining multiple deployments in a single prefect.yaml, each with different parameters (one per pfile) and a matching deployment name.

High-level approach:
- Put your flow script and pfile YAMLs in a project directory.
- Create a prefect.yaml that lists one deployment per pfile.
- Each deployment sets the flow's parameters (`env`, `pfile`) and uses a name that matches the pfile's filename.
- Run `prefect deploy` to create/update all deployments.
- When you add a new pfile, add one more entry to prefect.yaml and run `prefect deploy` again.

Example prefect.yaml (update the placeholders for your script path and flow function name):
```yaml
name: my-project
prefect-version: ">=3.0.0"

# Define one or more deployments for the same flow
deployments:
  - name: orders-dev
    entrypoint: ./flows/my_flow.py:my_flow   # path/to/script.py:flow_function_name
    parameters:
      env: dev
      pfile: configs/orders.yml
    work_pool:
      name: my-pool

  - name: customers-dev
    entrypoint: ./flows/my_flow.py:my_flow
    parameters:
      env: dev
      pfile: configs/customers.yml
    work_pool:
      name: my-pool

  # If you also want prod variants for the same pfiles:
  - name: orders-prod
    entrypoint: ./flows/my_flow.py:my_flow
    parameters:
      env: prod
      pfile: configs/orders.yml
    work_pool:
      name: my-pool
```
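For reference, here's a minimal sketch of the flow script these entrypoints assume — the file path, function name, and YAML-loading logic are placeholders; your actual script just needs a single `@flow`-decorated function that accepts `env` and `pfile`:

```python
# flows/my_flow.py — hypothetical sketch matching the entrypoints above
from prefect import flow

import yaml  # assumes PyYAML is installed


@flow
def my_flow(env: str, pfile: str):
    # pfile is a relative path, resolved from the working directory at run time
    with open(pfile) as f:
        config = yaml.safe_load(f)
    # ... your existing logic, driven by env and config ...
    print(f"env={env}, loaded config from {pfile}: {config}")
```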
Create the work pool (once) and deploy:
- If you don't have a work pool yet:
```bash
prefect work-pool create my-pool --type process
```
- Create or update all deployments from prefect.yaml:
```bash
prefect deploy
```
- Or deploy a subset by name/pattern (useful when adding one new pfile):
```bash
prefect deploy -n customers-dev
prefect deploy -n "orders-*"
```
- Start a worker to pick up runs:
```bash
prefect worker start --pool my-pool
```
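To sanity-check the result, you can list the deployments and trigger a run from the CLI (run names use the `<flow-name>/<deployment-name>` format; `my_flow` here is the assumed flow function name):

```bash
prefect deployment ls
prefect deployment run "my_flow/orders-dev"
```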
Adding a new pfile later:
- Drop the new YAML (e.g., `configs/inventory.yml`) into your repo.
- Add a new deployment entry to prefect.yaml:
```yaml
  - name: inventory-dev
    entrypoint: ./flows/my_flow.py:my_flow
    parameters:
      env: dev
      pfile: configs/inventory.yml
    work_pool:
      name: my-pool
```
- Deploy just that one (or all):
```bash
prefect deploy -n inventory-dev
# or
prefect deploy
```
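If the list grows long, you could also generate the deployments section from whatever pfiles exist instead of editing it by hand — a hypothetical sketch (file layout and names are assumptions):

```python
# gen_prefect_yaml.py — hypothetical helper: one dev deployment per configs/*.yml
from pathlib import Path

import yaml  # assumes PyYAML is installed

deployments = [
    {
        "name": f"{p.stem}-dev",  # deployment name matches the pfile filename
        "entrypoint": "./flows/my_flow.py:my_flow",
        "parameters": {"env": "dev", "pfile": str(p)},
        "work_pool": {"name": "my-pool"},
    }
    for p in sorted(Path("configs").glob("*.yml"))
]

with open("prefect.yaml", "w") as f:
    yaml.safe_dump(
        {"name": "my-project", "prefect-version": ">=3.0.0", "deployments": deployments},
        f,
        sort_keys=False,
    )
```

Run the helper, then `prefect deploy` to create or update everything in one pass.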
Tips:
- Entrypoint format is `./path/to/file.py:flow_function_name` (no extra Prefect code required in your script).
- The `pfile` path is resolved at runtime; if you later run in Docker/Kubernetes, ensure those files are in the image or mounted at the same relative path.
- You can keep the YAML DRY with YAML anchors/merge keys if you like (see the sketch below), but the simple explicit form above is easiest to start with.
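A minimal sketch of that DRY variant, assuming prefect.yaml is read by a YAML parser that honors anchors and merge keys (standard YAML; the `definitions` block name is just a convention):

```yaml
definitions:
  common: &common
    entrypoint: ./flows/my_flow.py:my_flow
    work_pool:
      name: my-pool

deployments:
  - <<: *common
    name: orders-dev
    parameters:
      env: dev
      pfile: configs/orders.yml
  - <<: *common
    name: customers-dev
    parameters:
      env: dev
      pfile: configs/customers.yml
```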
Docs for reference:
- Prefect YAML and deployments: How to: prefect.yaml
- Deployments concepts: Deployments concepts

Quick questions to tailor this exactly:
- What's the path to your script and the flow function name?
- Where do your pfile YAMLs live (e.g., `configs/*.yml`)?
- Are you running locally (process) or via Docker/Kubernetes? This affects how the `pfile` path should be made available at runtime.
j
@Marvin For this solution, I see the parameters created for each deployment in the prefect.yaml file. These parameters are passed into the flow call, but they are not script arguments that can be accessed via sys.argv. How can I adjust a deployment so that it supports script variables as args?
m
thought for 127.2 seconds
Short answer: Prefect deployments don't pass arbitrary CLI args to your script's sys.argv. In Prefect 3.x, deployments pass values as flow parameters. If your script expects to read `sys.argv`, there isn't a built-in way to wire deployment parameters into `sys.argv` directly. You've got three viable paths:

A) Recommended: use flow parameters (no extra Prefect code required)
- Keep your script's `@flow` function taking `env` and `pfile`.
- Use prefect.yaml to set those parameters per deployment (as shown earlier).
- If you still want local CLI usage, add a tiny, Prefect-agnostic fallback in your code that reads sys.argv locally:
```python
import sys

from prefect import flow

@flow
def my_flow(env: str | None = None, pfile: str | None = None):
    if env is None or pfile is None:
        # local CLI fallback
        # e.g., python my_flow.py dev configs/orders.yml
        _, env, pfile = sys.argv
    ...
```
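A quick usage sketch (assuming the hypothetical file name my_flow.py and the orders-dev deployment from earlier):

```bash
# Local CLI — parameters come from sys.argv
python my_flow.py dev configs/orders.yml

# Via Prefect — parameters come from the deployment's prefect.yaml entry
prefect deployment run "my_flow/orders-dev"
```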
This keeps local CLI use working while deployments use parameters, without adding any Prefect imports beyond the `@flow` decorator your script already has.

B) No changes to your existing script: add a tiny wrapper entrypoint that sets sys.argv
- Create a separate small file for the entrypoint (your original script remains unchanged). The wrapper maps deployment parameters to sys.argv, then calls your existing flow.
- Caveat: this creates a parent flow (the wrapper) with your existing flow as a subflow in Prefect.

wrap_entry.py:
```python
from prefect import flow
import sys
from pathlib import Path

# Import your existing flow without changing it
from my_flow import my_flow  # my_flow is your existing @flow

@flow
def wrapper(env: str, pfile: str):
    # Emulate CLI arguments so your script can keep using sys.argv
    sys.argv = [Path(__file__).name, env, pfile]
    # Call your original flow (it can still accept env/pfile directly, or read sys.argv internally)
    return my_flow(env=env, pfile=pfile)
```
prefect.yaml (entrypoint points to the wrapper, not your original file):
```yaml
deployments:
  - name: orders-dev
    entrypoint: ./wrap_entry.py:wrapper
    parameters:
      env: dev
      pfile: configs/orders.yml
    work_pool:
      name: my-pool
```

C) Environment variables per deployment (requires reading env vars in your script)
- Set per-deployment env vars via work pool job variables:
```yaml
deployments:
  - name: orders-dev
    entrypoint: ./flows/my_flow.py:my_flow
    parameters: {}  # optional if you read only env vars
    work_pool:
      name: my-pool
      job_variables:
        env:
          APP_ENV: dev
          APP_PFILE: configs/orders.yml
```
- Update your script to read `os.environ["APP_ENV"]` and `os.environ["APP_PFILE"]`. This is a code change, but no Prefect-specific imports are needed.
Why you can't just pass CLI args:
- The worker executes the Prefect runner process (not `python your_script.py ...`). Prefect loads your flow and injects parameters; it does not propagate deployment parameters into `sys.argv`.

If you can accept a minimal change, option A keeps a single flow run per execution and is the cleanest. If you must not touch the original script at all, option B works with the small wrapper file at the cost of creating a parent flow + subflow.

References:
- Prefect YAML + deployments: How to: prefect.yaml
- Deployments concepts: Deployments concepts

If you share your script's entrypoint (path:function) and whether nested runs are acceptable, I can give you a drop-in wrapper and prefect.yaml tailored to your repo.