Scott Pedersen
09/26/2025, 4:54 AM

Marvin
09/26/2025, 5:00 AM
The setting is `pod_watch_timeout_seconds`, and it lives under the work pool's base job template in `job_configuration`. You can set it either at the work-pool level (for all deployments) or per-deployment via `job_variables`.
Two good ways to do it
1) Set it for the whole work pool (base job template)
- Pull the default Kubernetes job template:
prefect work-pool get-default-base-job-template --type kubernetes > k8s_base.json
- Edit k8s_base.json and either:
- Hardcode a value in the job configuration (affects all runs and disables per-deployment overrides for this field):
"job_configuration": {
...
"pod_watch_timeout_seconds": 60,
...
}
- Or keep the templating and set a default in the variables section so you can still override per-deployment:
- Keep this in job_configuration:
"pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
- And add/update the default in `variables.properties.pod_watch_timeout_seconds.default`:
"variables": {
"properties": {
...
"pod_watch_timeout_seconds": {
"type": "integer",
"default": 60,
"description": "Timeout (seconds) for the Kubernetes pod watch stream"
}
}
}
- Apply the updated template to your work pool:
prefect work-pool update <your-pool-name> --base-job-template k8s_base.json
Tip: In the UI you can also set the default under Work Pools -> your Kubernetes pool -> Edit -> Variables -> Pod watch timeout seconds.
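If you'd rather script that edit than do it by hand, here is a minimal sketch that sets the default in the pulled k8s_base.json using only the Python standard library (it assumes the file came from the get-default-base-job-template command above):
import json

# k8s_base.json was produced by:
#   prefect work-pool get-default-base-job-template --type kubernetes > k8s_base.json
with open("k8s_base.json") as f:
    template = json.load(f)

# Set a default so deployments can still override it via job_variables
props = template["variables"]["properties"]
props.setdefault("pod_watch_timeout_seconds", {"type": "integer"})["default"] = 60

with open("k8s_base.json", "w") as f:
    json.dump(template, f, indent=2)
Then apply it with the same prefect work-pool update command shown above.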
2) Set it per deployment (job_variables)
- When deploying in Python, pass `job_variables`:
from prefect import flow

@flow
def my_flow():
    ...

my_flow.from_source(
    "https://github.com/your-org/your-repo",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="my-deployment",
    work_pool_name="my-k8s-pool",
    job_variables={"pod_watch_timeout_seconds": 60},
)
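To confirm the override landed on the deployment, here is a minimal sketch using the Prefect client (assumes Prefect 3.x, where the deployment object exposes job_variables; "my-flow/my-deployment" follows the example above, since the flow's default name is my-flow):
import asyncio
from prefect.client.orchestration import get_client

async def main():
    async with get_client() as client:
        # "<flow-name>/<deployment-name>"; adjust to your names
        dep = await client.read_deployment_by_name("my-flow/my-deployment")
        print(dep.job_variables)  # expect {'pod_watch_timeout_seconds': 60}

asyncio.run(main())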
Notes and references
- You can also set `job_watch_timeout_seconds` if you need the Job-level watch timeout as well; both appear in the default template. Verified via:
prefect work-pool get-default-base-job-template --type kubernetes
which includes:
"job_configuration": {
...
"job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
"pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
"stream_output": "{{ stream_output }}"
}
- The timeout controls the Kubernetes watch stream used by the worker/observer, not the pod’s runtime limit. It ensures the watch exits in a timely manner after events stop flowing. See the upstream change that added this behavior: prefect-kubernetes PR #57
- Prefect 3.x docs: Workers and Manage work pools
If you're not on Prefect 3.x or not using a Kubernetes work pool, let me know your setup and I'll tailor the steps.

Scott Pedersen
09/26/2025, 5:01 AM

Marvin
09/26/2025, 5:03 AM
1) Set it for the whole work pool (base job template)
- Pull the default Kubernetes job template:
prefect work-pool get-default-base-job-template --type kubernetes > k8s_base.json
- Edit k8s_base.json and set the value:
- Hardcode a global default:
"job_configuration": {
...
"pod_watch_timeout_seconds": 60,
...
}
- Or keep it templated and set a default in variables so you can still override per-deployment:
- Keep in job_configuration:
"pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
- Add a default under variables:
"variables": {
"properties": {
"pod_watch_timeout_seconds": {
"type": "integer",
"default": 60,
"description": "Timeout (seconds) for the Kubernetes pod watch stream"
}
}
}
- Apply it to your work pool:
prefect work-pool update <your-k8s-pool> --base-job-template k8s_base.json
2) Set it per deployment (job_variables)
- If you deploy with a prefect.yaml, add this under the deployment's work_pool:
deployments:
  - name: my-deployment
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: <your-k8s-pool>
      job_variables:
        pod_watch_timeout_seconds: 60
- Then run:
prefect deploy
Notes
- The exact key is `pod_watch_timeout_seconds`. You'll also see `job_watch_timeout_seconds` alongside it in the default template. Verified via:
prefect work-pool get-default-base-job-template --type kubernetes
- This controls how long the Kubernetes watch stream can sit idle before timing out; it does not limit pod runtime.
- Make sure your worker image has a recent prefect-kubernetes installed so the setting is honored (this behavior was added here: prefect-kubernetes PR #57).
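A quick way to check from inside the worker image, using only the Python standard library (a minimal sketch):
from importlib.metadata import version

# Prints the installed prefect-kubernetes version; compare it against the release that includes PR #57
print(version("prefect-kubernetes"))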
Quick check: are you running a Kubernetes worker/work pool (not the older agent + KubernetesJob infra)? If you're on the older agent flow, this setting won't apply; happy to help you migrate to workers on 2.20 or guide you to 3.x.

Scott Pedersen
09/26/2025, 5:08 AM
Would this go in our Helm values.yml file, as we have a base_job_config.json.tfpl that we used to deploy using Terraform to update the work pool?

Marvin
09/26/2025, 5:09 AM
Keep it in the base job template you manage with Terraform (base_job_config.json.tfpl). That's the correct place. Putting it in a Helm values.yml won't affect the work pool unless your chart is explicitly pushing a base job template to Prefect's API on startup.
Why
- `pod_watch_timeout_seconds` is a field in the Kubernetes work pool's base job template under `job_configuration`. The worker just reads the rendered job configuration from the API; it doesn't invent this value from its pod env (see the sketch below). So Helm values for the worker won't change it unless your chart also updates the work pool.
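To see exactly what the worker will read, here is a minimal sketch that fetches the work pool's base job template from the API (assumes Prefect 3.x and a pool named my-k8s-pool):
import asyncio
from prefect.client.orchestration import get_client

async def main():
    async with get_client() as client:
        pool = await client.read_work_pool("my-k8s-pool")
        # Shows either the "{{ pod_watch_timeout_seconds }}" placeholder or your hardcoded value
        print(pool.base_job_template["job_configuration"].get("pod_watch_timeout_seconds"))

asyncio.run(main())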
What to change in your Terraform base job template
- Recommended: keep the template placeholder and set a default in variables so you can still override per-deployment via `job_variables`.
Example snippet for your `base_job_config.json.tfpl`:
{
  "job_configuration": {
    ...
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    ...
  },
  "variables": {
    "properties": {
      ...
      "pod_watch_timeout_seconds": {
        "type": "integer",
        "default": 60,
        "description": "Timeout (seconds) for the Kubernetes pod watch stream"
      },
      "job_watch_timeout_seconds": {
        "type": "integer",
        "default": 60,
        "description": "Timeout (seconds) for the Kubernetes job watch stream"
      }
    }
  }
}
If you want a global hardcoded value instead (no per-deployment override), set it directly:
"job_configuration": {
...
"pod_watch_timeout_seconds": 60,
...
}
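Whichever form you use, you can sanity-check the rendered JSON before Terraform applies it. A minimal sketch follows; rendered_base_job_template.json is a hypothetical path for wherever you write the rendered output of your .tfpl:
import json

with open("rendered_base_job_template.json") as f:
    template = json.load(f)  # raises if the rendered file is not valid JSON

job_config = template["job_configuration"]
# The key should be present, either as the "{{ ... }}" placeholder or a hardcoded integer
assert "pod_watch_timeout_seconds" in job_config, "pod_watch_timeout_seconds missing from job_configuration"
print(job_config["pod_watch_timeout_seconds"])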
How to verify the field/key
- You can see both keys in Prefect’s default template:
prefect work-pool get-default-base-job-template --type kubernetes
You should see:
"job_configuration": {
...
"job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
"pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
"stream_output": "{{ stream_output }}"
}
Reference
- Behavior was added in prefect-kubernetes: prefect-kubernetes PR #57
If your Helm chart does push a work pool/base job template on install/upgrade, you can include the same JSON there. If you can share which chart/values you're using, I can confirm the exact key path for you.

Scott Pedersen
09/26/2025, 5:14 AM

Marvin
09/26/2025, 5:15 AM
With the prefect-helm worker chart, you set pod_watch_timeout_seconds inside the work pool's base job template JSON that the chart applies.
Two ways with prefect-helm
1) Inline JSON in values.yaml
- Add the base job template JSON under worker.config.baseJobTemplate.configuration. Keep the templating and set a default in variables so you can still override per-deployment via `job_variables`:
worker:
  apiConfig: server
  config:
    workPool: my-k8s-pool
    type: kubernetes
    baseJobTemplate:
      configuration: |
        {
          "job_configuration": {
            "command": "{{ command }}",
            "env": "{{ env }}",
            "labels": "{{ labels }}",
            "name": "{{ name }}",
            "namespace": "{{ namespace }}",
            "job_manifest": {
              "apiVersion": "batch/v1",
              "kind": "Job",
              "metadata": {
                "generateName": "{{ name }}-",
                "labels": "{{ labels }}",
                "namespace": "{{ namespace }}"
              },
              "spec": {
                "backoffLimit": 0,
                "template": {
                  "spec": {
                    "completions": 1,
                    "containers": [
                      {
                        "args": "{{ command }}",
                        "env": "{{ env }}",
                        "image": "{{ image }}",
                        "imagePullPolicy": "{{ image_pull_policy }}",
                        "name": "prefect-job"
                      }
                    ],
                    "parallelism": 1,
                    "restartPolicy": "Never",
                    "serviceAccountName": "{{ service_account_name }}"
                  }
                },
                "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
              }
            },
            "cluster_config": "{{ cluster_config }}",
            "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
            "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
            "stream_output": "{{ stream_output }}"
          },
          "variables": {
            "properties": {
              "namespace": { "type": "string", "default": "default" },
              "pod_watch_timeout_seconds": {
                "type": "integer",
                "default": 60,
                "description": "Timeout (seconds) for the Kubernetes pod watch stream"
              }
            }
          }
        }
2) Provide the JSON file at install/upgrade time
- Keep your JSON template as a file (e.g., the Terraform-generated base_job_config.json.tfpl output).
- Pass it to Helm using `--set-file`:
helm upgrade --install prefect-worker prefect/prefect-worker \
-n <ns> \
-f values.yaml \
--set worker.config.workPool=my-k8s-pool \
--set worker.config.type=kubernetes \
--set-file worker.config.baseJobTemplate.configuration=base-job-template.json
Optionally: use an existing ConfigMap
- The chart supports referencing a ConfigMap for the base job template (see “Allow an existing ConfigMap to be passed for baseJobTemplate”):
- Create a ConfigMap containing your JSON
- Reference it in values under worker.config.baseJobTemplate (keys vary slightly by chart version; look for existingConfigMap plus `name`/`key`).
Important notes
- This updates the work pool’s base job template via the worker chart. If you’re also using Terraform to update the same work pool/template, pick a single source of truth (Helm or Terraform) to avoid clobbering.
- pod_watch_timeout_seconds is already present in the default template; you just need to set it (either hardcoded in job_configuration, or as a default in variables so deployments can override).
- Prefect Helm chart: https://github.com/PrefectHQ/prefect-helm