Marvin
08/21/2025, 4:26 PM
- Two work pools, one per environment: k8s-dev and k8s-prod (create them with prefect work-pool create k8s-dev --type kubernetes and start one worker per environment with prefect worker start --pool k8s-dev). Workers in each env only pick up work from their pool, giving you hard isolation.
- Two deployments per flow (one per env)
- Name them clearly and tag them, e.g., my-flow-dev (tags: env:dev) and my-flow-prod (tags: env:prod).
- Point each deployment at its env’s work_pool/work_queue and set env-specific schedule, parameters, and job variables (like env vars).
- Environment-specific config
- Prefer worker/pool job variables for env vars that should differ by env.
- Optionally use Prefect Variables for config you read at runtime (use distinct keys per env, e.g., db_url_dev vs db_url_prod); see the sketch after this list.
- Local dev can rely on Prefect profiles and .env files.
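For the runtime-config option, here is a minimal sketch assuming Prefect 3 (where Variable.get returns the stored value directly); the APP_ENV environment variable and the db_url_dev / db_url_prod keys are hypothetical names used only for illustration:
import os

from prefect import flow
from prefect.variables import Variable


@flow
def my_flow(debug: bool = False):
    # The worker's job variables set APP_ENV per pool (hypothetical name),
    # so the same flow code resolves the right key in each environment.
    env = os.getenv("APP_ENV", "dev")

    # Read the env-specific Prefect Variable at runtime.
    db_url = Variable.get(f"db_url_{env}", default="sqlite:///local.db")
    print(f"env={env} debug={debug} db_url={db_url}")
Setting APP_ENV via the pool-level job variables (as in the deploy examples below) keeps the flow code identical across environments.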
Automations: scope by environment
- Easiest and most flexible: filter by deployment tags (env:dev, env:prod).
- You can also filter by work pool name (e.g., only k8s-prod) or by deployment name pattern (e.g., ends with -prod).
- Create separate automations for each env and apply the filter; a sketch of a prod-scoped trigger follows this list. For example:
  - “On flow run failure” → Filter: deployment has tag env:prod → Action: send Slack/Email, open incident, etc.
  - “SLA/late starts” → Filter: work pool is k8s-prod → Action: notify or cancel.
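To make the tag-based scoping concrete, here is a hedged sketch, written as a Python dict, of roughly what a prod-scoped automation payload looks like; the trigger/action field names follow the event-trigger schema (expect / match_related / posture), so verify them against the filter schema and Automations references linked below, and the notification block ID is a placeholder:
# Rough shape of an automation that reacts to failed flow runs tagged env:prod.
prod_failure_automation = {
    "name": "prod-flow-run-failures",
    "trigger": {
        "type": "event",
        "posture": "Reactive",
        "expect": ["prefect.flow-run.Failed"],  # fire on failed flow runs
        "match_related": {
            "prefect.resource.role": "tag",
            "prefect.resource.id": "prefect.tag.env:prod",  # only runs related to the env:prod tag
        },
        "threshold": 1,
        "within": 0,
    },
    "actions": [
        {
            "type": "send-notification",
            "block_document_id": "<your-notification-block-id>",  # placeholder
            "subject": "Prod flow run failed",
            "body": "{{ flow_run.name }} failed in k8s-prod",
        }
    ],
}
The dev counterpart is identical except for the tag (prefect.tag.env:dev) and, typically, a lower-urgency action.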
Deployments: examples
Using Python (we verified these method signatures):
from prefect.flows import Flow

flow = Flow.from_source(
    source="https://github.com/your-org/your-repo",
    entrypoint="flows/my_flow.py:my_flow",
)

# DEV
flow.deploy(
    name="my-flow-dev",
    work_pool_name="k8s-dev",
    tags=["env:dev"],
    parameters={"debug": True},
    cron="*/5 * * * *",  # runs every 5 minutes
    job_variables={"env": {"LOG_LEVEL": "DEBUG", "API_BASE_URL": "https://dev.api"}},  # env vars passed to the worker's job
)

# PROD
flow.deploy(
    name="my-flow-prod",
    work_pool_name="k8s-prod",
    tags=["env:prod"],
    parameters={"debug": False},
    cron="0 * * * *",  # hourly
    job_variables={"env": {"LOG_LEVEL": "INFO", "API_BASE_URL": "https://api"}},
)
Using prefect.yaml (checked pattern in docs)
- Put both deployments in a single file and run prefect deploy from your CI per environment (e.g., prefect deploy --name my-flow-dev in the dev pipeline).
name: my-project
deployments:
  - name: my-flow-dev
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: k8s-dev
      job_variables:
        env:
          LOG_LEVEL: DEBUG
          API_BASE_URL: https://dev.api
    tags: ["env:dev"]
    parameters:
      debug: true
    schedules:
      - cron: "*/5 * * * *"
  - name: my-flow-prod
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: k8s-prod
      job_variables:
        env:
          LOG_LEVEL: INFO
          API_BASE_URL: https://api
    tags: ["env:prod"]
    parameters:
      debug: false
    schedules:
      - cron: "0 * * * *"
Automations: where to configure
- UI: Create two automations and add a filter:
  - Filter → “Deployment tags contains env:prod” (for prod) or “env:dev” (for dev).
  - Or filter by Work pool → equals k8s-prod or k8s-dev.
- CLI/Python: You can create automations with filters; see docs for filter schemas and CLI usage.
Helpful links
- Prefect YAML (multiple deployments): prefect.yaml guide
- Automations CLI: Automation CLI reference
- Filter schemas used by automations: Filters (API reference)
- Work pools overview: Work pools and workers blog
Want me to draft a concrete template for your stack? If you can share:
- Your infra (Docker, K8s, VM)
- Desired schedules
- Any env-specific parameters or env vars
Marvin
08/21/2025, 4:26 PM
I’ll draft the prefect.yaml, deployment code, and example automation configurations scoped to dev vs prod.