# ask-marvin
@Marvin when deploying an app, what is the best way to pass environment variables to the flows?
thought for 245.1 seconds
Great question. In Prefect 3.x, the cleanest, most scalable way to pass environment variables to your flows is to use work pool job templates for shared defaults and override per deployment where needed. Here's the playbook:
- Put shared env for many flows in your work pool's base job template (ideal for your 50–100 flows)
- Override per deployment with `job_variables.env` in `prefect.yaml` or in code via `deploy(...)`
- Use Blocks or platform secrets for sensitive values (don't hardcode secrets in `prefect.yaml`)
- Use `.env` only for local development, not production

How to set env vars

1) Work pool defaults (best for org-wide/shared config)
- Get the default template for your worker type, edit it to add defaults under `variables.properties.env.default`, and update the pool (a scripted version of the edit step follows the commands below):
```bash
prefect work-pool get-default-base-job-template --type process > process-template.json
# edit process-template.json and set variables.properties.env.default = {"DATA_DIR": "/data", "API_URL": "..."}
prefect work-pool update my-pool --base-job-template process-template.json
```
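If you prefer to script the edit step instead of doing it by hand, here is a minimal sketch using only the standard library. It assumes the `process-template.json` file produced above and the usual base-job-template layout (a JSON schema under `variables`); inspect your exported file first, since the exact shape can vary by worker type, and the `API_URL` value is hypothetical.

```python
import json

# Load the base job template exported by the CLI command above
with open("process-template.json") as f:
    template = json.load(f)

# Shared defaults for the "env" job variable: every deployment on this pool
# inherits these unless it overrides them via job_variables.env
template["variables"]["properties"]["env"]["default"] = {
    "DATA_DIR": "/data",
    "API_URL": "https://api.example.com",  # hypothetical value
}

with open("process-template.json", "w") as f:
    json.dump(template, f, indent=2)
```

Run the `prefect work-pool update` command above afterwards to apply the change.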
This applies to all deployments that target `my-pool` unless they override it. Docs: Customize job variables, Manage work pools

2) Per-deployment overrides (in `prefect.yaml`)
- Add an `env` dict under `job_variables`:
```yaml
deployments:
- name: prod
  entrypoint: flows/my_flow.py:my_flow
  work_pool:
    name: my-pool
  job_variables:
    env:
      DATA_DIR: /mnt/prod-data
      FEATURE_FLAG: "on"
```
Run `prefect deploy` from your project to register this.

3) Per-deployment overrides (in code)
```python
from prefect import flow

f = flow.from_source(
    "https://github.com/org/repo",
    entrypoint="flows/my_flow.py:my_flow",
)

f.deploy(
    name="prod",
    work_pool_name="my-pool",
    job_variables={"env": {"DATA_DIR": "/mnt/prod-data", "FEATURE_FLAG": "on"}},
)
```
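If you need a one-off override for a single run rather than a change to the deployment itself, `run_deployment` accepts `job_variables` as well. A quick sketch (the deployment name here is hypothetical):

```python
from prefect.deployments import run_deployment

# Override env for this run only; the deployment's stored job variables
# are left untouched
run_deployment(
    name="my-flow/prod",
    job_variables={"env": {"DATA_DIR": "/mnt/adhoc-data"}},
)
```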
4) Secrets
- Prefer Prefect Secret blocks for sensitive values and load them at runtime in your flow:
```python
from prefect import flow
from prefect.blocks.system import Secret

@flow
def my_flow():
    # Resolved from the Prefect API at runtime; never written to prefect.yaml
    token = Secret.load("my-token").get()
```
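If a dependency insists on reading its configuration from an environment variable, one pattern is to export the secret into the flow process's environment at runtime. A sketch (the variable name `MY_LIB_TOKEN` is hypothetical, and this only helps libraries that read the variable after the flow has started):

```python
import os

from prefect import flow
from prefect.blocks.system import Secret

@flow
def my_flow():
    # Export the secret for libraries that read env vars lazily; if the
    # variable must exist before the process starts, use the platform
    # mechanisms described below instead
    os.environ["MY_LIB_TOKEN"] = Secret.load("my-token").get()
```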
Docs: Store secrets
- If a library must read secrets from environment variables (not via Python), use your platform's secret mechanism:
  - Kubernetes: reference Secrets via `envFrom` or `env.valueFrom` in a custom job template
  - Docker/Podman: use Docker/Podman secrets, or pass non-secret env via job variables, and ensure your registry creds are configured via the runtime

Environment variable precedence and behavior
- Process worker
  - Flow run env = worker process env overlaid with job variables (`env` from deployment > pool defaults)
  - If the same key exists in both, the deployment's `job_variables.env` wins (a small merge sketch follows the first quick example below)
- Docker/Kubernetes workers
  - Flow run env = container image env overlaid with job variables (`env` from deployment > pool defaults)
  - Worker host env is not inherited into the container unless explicitly passed
  - For Kubernetes, you can also pull from K8s Secrets via a custom job template
- Prefect settings (PREFECT_*)
  - If you're setting Prefect's own config via env, normal Prefect settings precedence applies: the active profile/environment overrides defaults
  - Docs: Settings and profiles

Quick examples
- Minimal `prefect.yaml` with env:
```yaml
deployments:
- name: etl-prod
  entrypoint: flows/etl.py:etl
  work_pool:
    name: process-pool
  job_variables:
    env:
      DATA_DIR: /data/etl
      LOG_LEVEL: INFO
```
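And the merge sketch promised above. This is purely illustrative dict-overlay semantics for the process worker case, not Prefect internals:

```python
# Later layers win on key conflicts, matching the precedence described above
worker_process_env = {"DATA_DIR": "/data", "PATH": "/usr/bin"}
pool_default_env = {"LOG_LEVEL": "INFO"}          # base job template default
deployment_env = {"DATA_DIR": "/mnt/prod-data"}   # deployment's job_variables.env

effective_env = {**worker_process_env, **pool_default_env, **deployment_env}
# {'DATA_DIR': '/mnt/prod-data', 'PATH': '/usr/bin', 'LOG_LEVEL': 'INFO'}
```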
- Kubernetes job template snippet to load secrets (optional). Get the default, then add:
```yaml
spec:
  template:
    spec:
      containers:
      - name: prefect-job
        envFrom:
        - secretRef:
            name: my-app-secrets
```
Then set that as your pool’s base job template.
Given your environment
- With RHEL9 and Podman allowed: if you use Docker workers, ensure Podman's Docker-compatible socket is enabled so the Docker SDK can talk to it; otherwise consider process workers (best for local filesystem access) or Kubernetes if you have a cluster.
- For 50–100 flows in the same Conda environment with shared config, putting shared env in the work pool base job template will minimize duplication; then override per deployment only where necessary.

Questions to tailor this for you
- Which worker type(s) are you using (process, docker via Podman, or kubernetes)?
- Do you need secrets exposed as environment variables to third-party libs, or is using Prefect Secret blocks acceptable?

Happy to provide a concrete template for your worker type once I know which you're running.