# ask-marvin
@Marvin how can I update the job manifest for a single deployment?
thought for 254.4 seconds
Great question. Assuming you're on Prefect 3.x and using a Kubernetes work pool, the base "job manifest" (pod spec) comes from the work pool's job template. To change it for a single deployment, override the template's variables with `job_variables` on that deployment.

Key points:
- The base template lives on the work pool. Edit it in the UI (Work Pools -> your pool -> Job template) or via the API if you need to change its structure (e.g., add volumes or initContainers). For structural changes, create or update a separate work pool.
- For per-deployment tweaks (image, env vars, resources, nodeSelector, tolerations, serviceAccountName, namespace, labels/annotations, imagePullSecrets, etc.), set `job_variables` on the deployment. The keys must match variables used by your pool's job template.

Docs:
- Kubernetes work pools
- Customize job variables
- Manage work pools and templates

Python API example (override for one deployment):
```python
from prefect import flow

@flow
def my_flow():
    ...

# If your code lives in a repo, use from_source; otherwise deploy the
# locally defined flow directly (e.g., my_flow.deploy(...)).
remote_flow = flow.from_source(
    source="https://github.com/org/repo",
    entrypoint="flows/my_flow.py:my_flow",
)

deployment_id = remote_flow.deploy(
    name="my-k8s-deployment",
    work_pool_name="k8s-pool",
    # You can also set the image via image="ghcr.io/org/app:123" here;
    # it will populate the template's image variable when present.
    job_variables={
        "image": "ghcr.io/org/app:123",
        "namespace": "data",
        "service_account_name": "prefect-runner",
        "env": [
            {"name": "EXTRA_CONFIG", "value": "true"},
            {"name": "LOG_LEVEL", "value": "INFO"},
        ],
        "resources": {
            "requests": {"cpu": "500m", "memory": "1Gi"},
            "limits": {"cpu": "1", "memory": "2Gi"},
        },
        "node_selector": {"<http://kubernetes.io/os|kubernetes.io/os>": "linux"},
        "tolerations": [
            {"key": "workload", "operator": "Equal", "value": "data", "effect": "NoSchedule"}
        ],
        "image_pull_secrets": [{"name": "regcred"}],
        "labels": {"team": "data-eng"},
        "annotations": {"<http://prometheus.io/scrape|prometheus.io/scrape>": "true"},
    },
)
print("Deployment created:", deployment_id)
prefect.yaml example (deploy with the CLI):
```yaml
# prefect.yaml
name: my-project
# optional
# build: []
# push: []

deployments:
  - name: my-k8s-deployment
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: k8s-pool
      job_variables:
        image: ghcr.io/org/app:123
        namespace: data
        service_account_name: prefect-runner
        env:
          - name: EXTRA_CONFIG
            value: "true"
          - name: LOG_LEVEL
            value: "INFO"
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "1"
            memory: "2Gi"
        node_selector:
          kubernetes.io/os: linux
        tolerations:
          - key: workload
            operator: Equal
            value: data
            effect: NoSchedule
        image_pull_secrets:
          - name: regcred
        labels:
          team: data-eng
        annotations:
          prometheus.io/scrape: "true"
```
Then run:
```bash
prefect deploy
```
This reads prefect.yaml and applies the per-deployment overrides. If you only want to deploy this one, you can filter by name (e.g., `prefect deploy --name my-k8s-deployment`).

When to change the work pool instead:
- If you need to modify the structure of the job (e.g., add initContainers, custom volume mounts, sidecars, or new fields that the default template doesn't expose as variables), edit the work pool's job template or create a new work pool with a custom template, then point your deployment at that pool. A CLI sketch follows below.
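A minimal CLI sketch for that route, assuming Prefect 3.x (pool and file names here are illustrative):

```bash
# Dump the default Kubernetes base job template to a file for editing
prefect work-pool get-default-base-job-template --type kubernetes > k8s-template.json

# ...edit k8s-template.json (add initContainers, volumes, sidecars, etc.)...

# Apply it to an existing pool, or create a new pool that uses it
prefect work-pool update k8s-pool --base-job-template k8s-template.json
prefect work-pool create k8s-custom-pool --type kubernetes --base-job-template k8s-template.json
```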
Common gotchas:
- If your override appears to be ignored, verify the keys in `job_variables` match variables defined in the work pool's job template.
- Ensure your worker is connected to the same work pool you're targeting.

Both are quick to verify from the CLI (see the sketch below). If you can share which fields you need to change (e.g., adding tolerations vs. volumes), I can tailor the exact `job_variables` snippet for your pool's template.
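A hedged verification sketch (pool and deployment names taken from the examples above):

```bash
# Show the pool's base job template and the variables it exposes
prefect work-pool inspect k8s-pool

# Confirm the job_variables recorded on the deployment
prefect deployment inspect 'my-flow/my-k8s-deployment'
```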