Scott Pedersen
10/15/2025, 3:09 AM

Marvin
10/15/2025, 3:13 AM
Set job_variables on that deployment.
Key points:
- The base template lives on the work pool. Edit it in the UI (Work Pools -> your pool -> Job template) or via the API when you need to change its structure (e.g., add volumes or initContainers). For structural changes like that, create or update a dedicated work pool rather than overriding per deployment.
- For per-deployment tweaks (image, env vars, resources, nodeSelector, tolerations, serviceAccountName, namespace, labels/annotations, imagePullSecrets, etc.), set job_variables on the deployment. The keys must match variables used by your pool’s job template.
Docs:
- Kubernetes work pools
- Customize job variables
- Manage work pools and templates
Python API example (override for one deployment)
from prefect import flow

@flow
def my_flow():
    ...

# If your code lives in a repo, load it with from_source; otherwise deploy the local flow.
remote_flow = flow.from_source(
    source="https://github.com/org/repo",
    entrypoint="flows/my_flow.py:my_flow",
)

deployment_id = remote_flow.deploy(
    name="my-k8s-deployment",
    work_pool_name="k8s-pool",
    # You can also set the image via `image="ghcr.io/org/app:123"` here;
    # it will populate the template's image variable when present.
    job_variables={
        "image": "ghcr.io/org/app:123",
        "namespace": "data",
        "service_account_name": "prefect-runner",
        "env": [
            {"name": "EXTRA_CONFIG", "value": "true"},
            {"name": "LOG_LEVEL", "value": "INFO"},
        ],
        "resources": {
            "requests": {"cpu": "500m", "memory": "1Gi"},
            "limits": {"cpu": "1", "memory": "2Gi"},
        },
        "node_selector": {"kubernetes.io/os": "linux"},
        "tolerations": [
            {"key": "workload", "operator": "Equal", "value": "data", "effect": "NoSchedule"}
        ],
        "image_pull_secrets": [{"name": "regcred"}],
        "labels": {"team": "data-eng"},
        "annotations": {"prometheus.io/scrape": "true"},
    },
)
print("Deployment created:", deployment_id)
prefect.yaml example (deploy with CLI)
# prefect.yaml
name: my-project

# optional
# build: []
# push: []

deployments:
  - name: my-k8s-deployment
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: k8s-pool
      job_variables:
        image: ghcr.io/org/app:123
        namespace: data
        service_account_name: prefect-runner
        env:
          - name: EXTRA_CONFIG
            value: "true"
          - name: LOG_LEVEL
            value: "INFO"
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
        node_selector:
          kubernetes.io/os: linux
        tolerations:
          - key: workload
            operator: Equal
            value: data
            effect: NoSchedule
        image_pull_secrets:
          - name: regcred
        labels:
          team: data-eng
        annotations:
          prometheus.io/scrape: "true"
Then run:
prefect deploy
This reads prefect.yaml and applies the per-deployment overrides. To deploy just this one, filter by name: prefect deploy --name my-k8s-deployment
When to change the work pool instead:
- If you need to modify the structure of the job (e.g., add initContainers, custom volume mounts, sidecars, or new fields that the default template doesn’t expose as variables), edit the work pool’s job template or create a new work pool with your custom template. Then point your deployment at that pool.
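If you go that route, one way to bootstrap a custom template (a sketch assuming the standard Prefect 3 CLI; the pool names are placeholders) is to export the default Kubernetes template, edit it, and attach it to a pool:

```shell
# Export the default Kubernetes base job template to a file you can edit
prefect work-pool get-default-base-job-template --type kubernetes > base-template.json

# ...edit base-template.json: add initContainers, volumes, or sidecars to the
# job manifest, and expose any new fields in the "variables" schema so
# deployments can override them via job_variables...

# Create a new pool that uses the customized template
prefect work-pool create "k8s-pool-custom" --type kubernetes --base-job-template base-template.json

# Or update an existing pool in place
prefect work-pool update "k8s-pool" --base-job-template base-template.json
```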
Common gotchas:
- Make sure the keys in job_variables match variables defined in the work pool's job template.
- Ensure your worker is connected to the same work pool you’re targeting.
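To check both points (a sketch assuming the standard Prefect 3 CLI and the pool name from the examples above), inspect the pool and start a worker against it:

```shell
# Shows the pool's type, status, and base job template -- the template's
# "variables" schema lists the keys your job_variables must match
prefect work-pool inspect "k8s-pool"

# Start a Kubernetes worker polling that same pool
prefect worker start --pool "k8s-pool" --type kubernetes
```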
If you can share which fields you need to change (e.g., adding tolerations vs. volumes), I can tailor the exact job_variables snippet for your pool’s template.