rania3
10/01/2025, 1:51 PM

Marvin
10/01/2025, 2:00 PM
# 1) Get the default template
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json
# 2) Edit base-job-template.json — add the annotations under spec.template.metadata;
#    keep the rest of the generated template (its variables section is shown empty below for brevity):
{
  "job_configuration": {
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-",
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}"
      },
      "spec": {
        "template": {
          "metadata": {
            "annotations": {
              "sidecar.istio.io/inject": "false",
              "example.com/team": "data-eng"
            }
          },
          "spec": {
            "serviceAccountName": "{{ service_account_name }}",
            "restartPolicy": "Never",
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "env": "{{ env }}",
                "args": "{{ command }}"
              }
            ]
          }
        },
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      }
    },
    "command": "{{ command }}",
    "env": "{{ env }}",
    "labels": "{{ labels }}",
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "cluster_config": "{{ cluster_config }}"
  },
  "variables": { }
}
# 3) Create or update the work pool with this template
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template ./base-job-template.json --overwrite
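To confirm the annotations made it into the pool's template, you can inspect the pool:
prefect work-pool inspect "my-k8s-pool"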
Then point your Helm-installed worker at that work pool by setting the work pool name in your worker chart values (commonly worker.config.workPool in the prefect-worker chart; exact keys vary by chart version).
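A minimal values.yaml fragment under that assumption:
worker:
  config:
    workPool: my-k8s-pool   # must match the pool created above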
2) If your Helm chart supports passing a base job template
Some versions of the Prefect OSS worker chart allow provisioning or updating a work pool on startup. If you have fields like workPool.create, workPool.name, and workPool.baseJobTemplate, you can embed the same JSON from above directly in values.yaml so it’s applied when the worker starts:
workPool:
  create: true
  name: my-k8s-pool
  type: kubernetes
  baseJobTemplate: |
    {
      "job_configuration": {
        "job_manifest": {
          "apiVersion": "batch/v1",
          "kind": "Job",
          "metadata": { "generateName": "{{ name }}-", "labels": "{{ labels }}", "namespace": "{{ namespace }}" },
          "spec": {
            "template": {
              "metadata": {
                "annotations": {
                  "sidecar.istio.io/inject": "false",
                  "example.com/team": "data-eng"
                }
              },
              "spec": {
                "serviceAccountName": "{{ service_account_name }}",
                "restartPolicy": "Never",
                "containers": [
                  { "name": "prefect-job", "image": "{{ image }}", "imagePullPolicy": "{{ image_pull_policy }}", "env": "{{ env }}", "args": "{{ command }}" }
                ]
              }
            },
            "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
          }
        },
        "command": "{{ command }}",
        "env": "{{ env }}",
        "labels": "{{ labels }}",
        "name": "{{ name }}",
        "namespace": "{{ namespace }}",
        "cluster_config": "{{ cluster_config }}"
      },
      "variables": { }
    }
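To apply the values, a hedged example assuming the standard prefect-worker chart from the prefect Helm repo (release name and namespace are illustrative):
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update
helm upgrade --install prefect-worker prefect/prefect-worker \
  --namespace prefect --create-namespace \
  -f values.yaml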
Important distinction
- Worker pod annotations (e.g., podAnnotations or worker.podAnnotations in the Helm chart) only annotate the worker Deployment/Pod.
- Flow-run pods are created by the work pool’s base job template; annotate them under job_configuration.job_manifest.spec.template.metadata.annotations as shown above.
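To double-check where annotations actually land, you can compare the two; the namespace, label selector, and pod name below are illustrative and assume the chart's default labels:
# Worker pod annotations (set via podAnnotations in the Helm chart)
kubectl -n prefect get pods -l app.kubernetes.io/name=prefect-worker \
  -o jsonpath='{.items[0].metadata.annotations}'
# Flow-run pod annotations (set via the work pool's base job template)
kubectl -n prefect get pod <flow-run-pod-name> -o jsonpath='{.metadata.annotations}'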
Docs and references
- Deploy a worker with Helm: Deploy a worker using Helm
- Manage work pools and base job templates: Manage work pools
- Kubernetes integration overview: Prefect Kubernetes integration
- Prefect Helm charts repo: prefect-helm
If you share which annotations you need (e.g., Istio, IRSA, Linkerd), I can tailor the exact template and, if needed, expose them as variables so you can override per-deployment.
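For reference, a minimal sketch of exposing the pod annotations as a work pool variable so they can be overridden per deployment — the variable name pod_annotations and its defaults are illustrative, and the other entries from the default variables schema are omitted for brevity:
{
  "job_configuration": {
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": { "generateName": "{{ name }}-", "labels": "{{ labels }}", "namespace": "{{ namespace }}" },
      "spec": {
        "template": {
          "metadata": { "annotations": "{{ pod_annotations }}" },
          "spec": {
            "serviceAccountName": "{{ service_account_name }}",
            "restartPolicy": "Never",
            "containers": [
              { "name": "prefect-job", "image": "{{ image }}", "imagePullPolicy": "{{ image_pull_policy }}", "env": "{{ env }}", "args": "{{ command }}" }
            ]
          }
        },
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      }
    },
    "command": "{{ command }}",
    "env": "{{ env }}",
    "labels": "{{ labels }}",
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "cluster_config": "{{ cluster_config }}"
  },
  "variables": {
    "type": "object",
    "properties": {
      "pod_annotations": {
        "type": "object",
        "title": "Pod annotations",
        "description": "Annotations applied to flow-run pods",
        "default": {
          "sidecar.istio.io/inject": "false",
          "example.com/team": "data-eng"
        }
      }
    }
  }
}
Deployments could then override it via job_variables, e.g. job_variables={"pod_annotations": {"example.com/team": "ml"}} in flow.deploy or the equivalent field in prefect.yaml.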