# ask-marvin
@Marvin If I have the same flows that I want to deploy to dev and prod environments, what is the standard pattern to handle automation for each environment separately?
**Marvin** · thought for 226.5 seconds
Great question — the standard pattern in Prefect 3.x is to create separate deployments and infrastructure per environment, then scope automations using tags or filters. Here’s a simple, proven setup.

**Recommended structure**
- Separate work pools (and optionally queues) per env — e.g., `k8s-dev` and `k8s-prod` pools. Workers in each env only pick up work from their pool, giving you hard isolation.
- Two deployments per flow (one per env). Name them clearly and tag them, e.g., `my-flow-dev` (tag `env:dev`) and `my-flow-prod` (tag `env:prod`). Point each deployment at its env’s work pool/work queue and set env-specific schedules, parameters, and job variables (like env vars).
- Environment-specific config:
  - Prefer worker/pool job variables for env vars that should differ by env.
  - Optionally use Prefect Variables for config you read at runtime (use distinct keys per env, e.g., `db_url_dev` vs `db_url_prod`).
  - Local dev can rely on Prefect profiles and `.env` files.

**Automations: scope by environment**
- Easiest and most flexible: filter by deployment tags (`env:dev`, `env:prod`).
- You can also filter by work pool name (e.g., only `k8s-prod`) or by deployment name pattern (e.g., ends with `-prod`).
- Create separate automations for each env and apply the filter. For example:
  - “On flow run failure” → Filter: deployment has tag `env:prod` → Action: send Slack/email, open an incident, etc.
  - “SLA/late starts” → Filter: work pool is `k8s-prod` → Action: notify or cancel.

**Deployments: examples**

Using Python (we verified these method signatures):
```python
from prefect.flows import Flow

flow = Flow.from_source(
    "https://github.com/your-org/your-repo",
    "flows/my_flow.py:my_flow",
)

# DEV
flow.deploy(
    name="my-flow-dev",
    work_pool_name="k8s-dev",
    tags=["env:dev"],
    parameters={"debug": True},
    cron="*/5 * * * *",  # runs every 5 minutes
    job_variables={"env": {"LOG_LEVEL": "DEBUG", "API_BASE_URL": "https://dev.api"}},  # env vars passed to worker
)

# PROD
flow.deploy(
    name="my-flow-prod",
    work_pool_name="k8s-prod",
    tags=["env:prod"],
    parameters={"debug": False},
    cron="0 * * * *",  # hourly
    job_variables={"env": {"LOG_LEVEL": "INFO", "API_BASE_URL": "https://api"}},
)
```
**Using `prefect.yaml`** (checked pattern in docs)

Put both deployments in a single file and run `prefect deploy` from your CI per environment.
```yaml
name: my-project

deployments:
  - name: my-flow-dev
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: k8s-dev
    tags: ["env:dev"]
    parameters:
      debug: true
    schedules:
      - cron: "*/5 * * * *"
    job_variables:
      env:
        LOG_LEVEL: DEBUG
        API_BASE_URL: https://dev.api

  - name: my-flow-prod
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: k8s-prod
    tags: ["env:prod"]
    parameters:
      debug: false
    schedules:
      - cron: "0 * * * *"
    job_variables:
      env:
        LOG_LEVEL: INFO
        API_BASE_URL: https://api
```
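In CI, each environment can then deploy only its own entry by name with `prefect deploy --name`. A hedged GitHub Actions sketch — the workflow layout, secret names, and job split are assumptions, not Prefect requirements:

```yaml
# Sketch: one CI job per environment, selecting the deployment by name.
jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install prefect
      # A prod job would run: prefect deploy --name my-flow-prod
      - run: prefect deploy --name my-flow-dev
        env:
          PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}
          PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}
```

Gating the prod job on your main branch (or a release tag) keeps prod deployments deliberate.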
**Automations: where to configure**
- UI: create two automations and add a filter:
  - Filter → “Deployment tags contains `env:prod`” (for prod) or “`env:dev`” (for dev).
  - Or filter by Work pool → equals `k8s-prod` or `k8s-dev`.
- CLI/Python: you can create automations with filters; see the docs for filter schemas and CLI usage.

**Helpful links**
- Prefect YAML (multiple deployments): prefect.yaml guide
- Automations CLI: Automation CLI reference
- Filter schemas used by automations: Filters (API reference)
- Work pools overview: Work pools and workers blog

Want me to draft a concrete template for your stack? If you can share:
- Your infra (Docker, K8s, VM)
- Desired schedules
- Any env-specific parameters or env vars

I’ll generate a ready-to-use `prefect.yaml`, deployment code, and example automation configurations scoped to dev vs prod.