Romain Lefebvre
10/13/2025, 8:08 AM
I ran `uv run prefect work-pool update --base-job-template template.json` and it told me: Updated work pool 'toto'
When I go into the UI, I can see that the work pool's "last updated"
field has changed. But the base job template hasn't been updated.
Here's my template.json:
{
"job_configuration": {
"command": "uv run prefect flow-run execute",
"env": "{{ env }}",
"labels": "{{ labels }}",
"name": "{{ name }}",
"namespace": "{{ namespace }}",
"job_manifest": {
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"generateName": "{{ name }}-",
"labels": "{{ labels }}",
"namespace": "{{ namespace }}"
},
"spec": {
"backoffLimit": 0,
"template": {
"spec": {
"completions": 1,
"containers": [
{
"args": "{{ command }}",
"env": "{{ env }}",
"image": "prefect-k8s",
"imagePullPolicy": "Never",
"name": "prefect-job"
}
],
"parallelism": 1,
"restartPolicy": "Never",
"serviceAccountName": "1"
}
},
"ttlSecondsAfterFinished": "60"
}
},
"cluster_config": "{{ cluster_config }}",
"job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
"pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
"stream_output": "{{ stream_output }}"
},
"variables": {
"description": "Default variables for the Kubernetes worker.\n\nThe schema for this class is used to populate the `variables` section of the default\nbase job template.",
"properties": {
"name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Name given to infrastructure created by a worker.",
"title": "Name"
},
"env": {
"additionalProperties": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"description": "Environment variables to set when starting a flow run.",
"title": "Environment Variables",
"type": "object"
},
"labels": {
"additionalProperties": {
"type": "string"
},
"description": "Labels applied to infrastructure created by a worker.",
"title": "Labels",
"type": "object"
},
"command": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker.",
"title": "Command"
},
"namespace": {
"default": "default",
"description": "The Kubernetes namespace to create jobs within.",
"title": "Namespace",
"type": "string"
},
"image": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The image reference of a container image to use for created jobs. If not set, the latest Prefect image will be used.",
"examples": [
"docker.io/prefecthq/prefect:3-latest"
],
"title": "Image"
},
"service_account_name": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The Kubernetes service account to use for job creation.",
"title": "Service Account Name"
},
"image_pull_policy": {
"default": "IfNotPresent",
"description": "The Kubernetes image pull policy to use for job containers.",
"enum": [
"IfNotPresent",
"Always",
"Never"
],
"title": "Image Pull Policy",
"type": "string"
},
"finished_job_ttl": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If not set, jobs will be retained indefinitely.",
"title": "Finished Job TTL"
},
"job_watch_timeout_seconds": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Number of seconds to wait for each event emitted by a job before timing out. If not set, the worker will wait for each event indefinitely.",
"title": "Job Watch Timeout Seconds"
},
"pod_watch_timeout_seconds": {
"default": 60,
"description": "Number of seconds to watch for pod creation before timing out.",
"title": "Pod Watch Timeout Seconds",
"type": "integer"
},
"stream_output": {
"default": true,
"description": "If set, output will be streamed from the job to local standard output.",
"title": "Stream Output",
"type": "boolean"
},
"cluster_config": {
"anyOf": [
{
"$ref": "#/definitions/KubernetesClusterConfig"
},
{
"type": "null"
}
],
"default": null,
"description": "The Kubernetes cluster config to use for job creation."
}
},
"type": "object",
"definitions": {
"KubernetesClusterConfig": {
"block_schema_references": {},
"block_type_slug": "kubernetes-cluster-config",
"description": "Stores configuration for interaction with Kubernetes clusters.\n\nSee `from_file` for creation.",
"properties": {
"config": {
"description": "The entire contents of a kubectl config file.",
"title": "Config",
"type": "object"
},
"context_name": {
"description": "The name of the kubectl context to use.",
"title": "Context Name",
"type": "string"
}
},
"required": [
"config",
"context_name"
],
"secret_fields": [],
"title": "KubernetesClusterConfig",
"type": "object"
}
}
}
}
As you can see, I've updated the `command`, `image`, and `imagePullPolicy` with custom values, but they are not applied. Do you know why?
Marvin
10/13/2025, 8:15 AM
- The pool's base job template has two parts. The job_configuration section is a job spec skeleton that references placeholders such as `{{ image }}`, `{{ command }}`, and `{{ image_pull_policy }}`.
- At run time, the worker renders that template using values provided by the deployment’s job variables. Deployment-level values take precedence over the pool defaults.
- If your deployments already specify `job_variables.image`, `job_variables.command`, or `job_variables.image_pull_policy`, they will override your pool template changes.
- The UI’s “Base job template” view is not per-deployment; it shows the template (often with placeholders), not a concrete, resolved job spec.
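The rendering behavior described above can be sketched in plain Python. This is a simplified illustration only, not Prefect's actual implementation; the `render` helper and the sample values are hypothetical:

```python
import json
import re

# Illustrative sketch of how a worker might render a base job template.
# A full-string placeholder like "{{ image }}" is replaced with the
# variable's value; deployment job_variables override pool defaults.
PLACEHOLDER = re.compile(r"^\{\{\s*(\w+)\s*\}\}$")

def render(node, values):
    """Recursively substitute {{ var }} placeholders with their values."""
    if isinstance(node, dict):
        return {k: render(v, values) for k, v in node.items()}
    if isinstance(node, list):
        return [render(v, values) for v in node]
    if isinstance(node, str):
        match = PLACEHOLDER.match(node)
        if match and match.group(1) in values:
            return values[match.group(1)]  # keeps the value's original type
    return node

pool_defaults = {"image": "prefect-k8s", "image_pull_policy": "Never"}
deployment_job_variables = {"image": "my-registry/my-image:tag"}

# Deployment-level values take precedence over the pool defaults
values = {**pool_defaults, **deployment_job_variables}

job_configuration = {
    "image": "{{ image }}",
    "imagePullPolicy": "{{ image_pull_policy }}",
}

print(json.dumps(render(job_configuration, values)))
```

Note how the deployment's `image` wins while `image_pull_policy`, unset at the deployment level, falls back to the pool default.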
How to verify what’s persisted
- Inspect the pool to confirm the template that’s currently stored:
prefect work-pool inspect toto
- Then check a deployment using this pool and see its job variables (UI > Deployments > your deployment > Job variables). If `image`, `command`, or `image_pull_policy` are set there, they will win over the pool defaults.
Ways to make your changes take effect
1) Set defaults at the pool level (recommended for global defaults across deployments)
- Keep the placeholders in the job_configuration section and define defaults in the template’s variables schema.
- Example skeleton for a Kubernetes pool template (partial):
{
"job_configuration": {
"job": {
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "prefect-job",
"image": "{{ image }}",
"imagePullPolicy": "{{ image_pull_policy }}",
"command": "{{ command }}"
}
]
}
}
}
}
},
"variables": {
"properties": {
"image": { "type": "string", "default": "your-registry/your-image:tag" },
"image_pull_policy": { "type": "string", "default": "IfNotPresent" },
"command": {
"type": "array",
"items": { "type": "string" },
"default": ["python", "-m", "prefect.engine"]
}
},
"required": []
}
}
- Note the variable names are snake_case in placeholders: `{{ image }}`, `{{ image_pull_policy }}`, `{{ command }}`. Using `{{ imagePullPolicy }}` will not resolve.
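A tiny sketch of why the name matters (the `substitute` helper is hypothetical, not Prefect's code): a placeholder whose name has no matching variable is simply left unrendered, so the literal string would end up in the job manifest.

```python
# Assumed variables schema values for a pool
variables = {"image_pull_policy": "IfNotPresent"}

def substitute(value, variables):
    """Resolve a single '{{ name }}' placeholder against known variables."""
    name = value.removeprefix("{{ ").removesuffix(" }}")
    return variables.get(name, value)  # unknown names stay unrendered

# snake_case name matches a variable and resolves:
print(substitute("{{ image_pull_policy }}", variables))  # IfNotPresent
# camelCase name matches nothing and is passed through verbatim:
print(substitute("{{ imagePullPolicy }}", variables))    # {{ imagePullPolicy }}
```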
2) Set per-deployment values (overrides pool defaults)
- In prefect.yaml:
deployments:
- name: my-deployment
work_pool:
name: toto
job_variables:
image: "your-registry/your-image:tag"
image_pull_policy: "IfNotPresent"
command: ["python", "-m", "prefect.engine"]
- Or in Python:
flow.from_source("https://github.com/org/repo", entrypoint="flows/my_flow.py:flow").deploy(
name="my-deployment",
work_pool_name="toto",
job_variables={
"image": "your-registry/your-image:tag",
"image_pull_policy": "IfNotPresent",
"command": ["python", "-m", "prefect.engine"],
},
)
Common gotchas
- Command vs args: Prefect's Kubernetes worker sets container args to run the flow. Overriding `command` is advanced and may not behave as expected unless you know the entrypoint. Prefer leaving `command` at the default unless you really need to change it.
- Variable names: The K8s field is `imagePullPolicy`, but the template variable must be `{{ image_pull_policy }}`. If your template uses the wrong placeholder name, it won't be filled.
- Existing deployments: Updating the pool template does not retroactively change deployment job variables; redeploy with new `job_variables` or remove them so the deployment inherits the pool defaults.
Next steps
- Can you run this and share the “base_job_template” section it prints?
prefect work-pool inspect toto
- And for one affected deployment, share the current job variables. That will let us confirm where the override is happening.
Docs for reference
Romain Lefebvre
10/13/2025, 8:20 AM
`job_configuration` has been updated with the right values. But do I need to update the `variables` section? What's the difference between both?
Marvin
10/13/2025, 8:20 AM
- job_configuration: The concrete job spec the worker renders at run time, using placeholders such as `{{ image }}`, `{{ command }}`, and `{{ image_pull_policy }}`.
- variables: A JSON Schema that defines the placeholders used in job_configuration — their names, types, and optional defaults. These defaults are used when a deployment does not supply a value and they also power the UI form and validation.
Do you need to update variables?
- If you hardcode literal values in job_configuration (e.g., `"image": "my-registry/my-image:tag"`), you do not need variables for those fields, but any deployment that has `job_variables.image` set will still override your hardcoded value.
- If you use placeholders in job_configuration (recommended), you should define them in variables and set defaults if you want pool-level defaults. Then deployments can override them when needed.
Precedence at runtime
1) Deployment job_variables override pool defaults.
2) If a deployment does not set a value, the pool’s variables.default (from the base job template) is used.
3) If neither is provided, rendering may fall back to worker defaults or fail if required.
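That precedence chain can be sketched as follows (simplified, assumed behavior; `resolve` is a hypothetical helper, not a Prefect API):

```python
# Sketch of the precedence described above:
# 1) deployment job_variables, 2) pool variable defaults, 3) nothing provided.

def resolve(name, deployment_vars, pool_schema):
    if name in deployment_vars:               # 1) deployment value wins
        return deployment_vars[name]
    prop = pool_schema["properties"].get(name, {})
    if "default" in prop:                     # 2) pool-level default
        return prop["default"]
    raise KeyError(f"no value provided for {name!r}")  # 3) fails/falls back

pool_schema = {
    "properties": {
        "image": {"default": "prefecthq/prefect:3-latest"},
        "image_pull_policy": {"default": "IfNotPresent"},
    }
}
deployment_vars = {"image": "my-registry/my-image:tag"}

print(resolve("image", deployment_vars, pool_schema))
print(resolve("image_pull_policy", deployment_vars, pool_schema))
```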
Example: recommended pattern for pool-level defaults
{
"job_configuration": {
"job": {
"apiVersion": "batch/v1",
"kind": "Job",
"spec": {
"template": {
"spec": {
"restartPolicy": "Never",
"containers": [
{
"name": "prefect-job",
"image": "{{ image }}",
"imagePullPolicy": "{{ image_pull_policy }}",
"command": "{{ command }}"
}
]
}
}
}
}
},
"variables": {
"type": "object",
"properties": {
"image": { "type": "string", "default": "your-registry/your-image:tag" },
"image_pull_policy": { "type": "string", "default": "IfNotPresent" },
"command": {
"type": "array",
"items": { "type": "string" },
"default": ["python", "-m", "prefect.engine"]
}
}
}
}
Notes:
- Variable names in placeholders are snake_case (e.g., `{{ image_pull_policy }}`) even though the K8s field is `imagePullPolicy`.
- Overriding `command` is advanced; the Kubernetes worker sets container args to run your flow. Changing `command` can break execution unless you know what you're doing. Often you only need to change `image` and `image_pull_policy`.
What to do next
- If you want your new values to be the default for all deployments using this pool:
- Keep placeholders in job_configuration.
- Add/update defaults for those variables in the variables section.
- Ensure your deployments do not set their own job_variables for those fields, or update/remove them.
- If you want to lock values and prevent overrides, you can hardcode them in job_configuration and remove the placeholders — but be aware existing deployments with job_variables set will still override them.
Helpful checks
- Inspect the pool to confirm what’s stored:
prefect work-pool inspect toto
- Check a deployment's job_variables (UI > Deployments > your deployment > Job variables). If `image`, `command`, or `image_pull_policy` are set there, they will override your pool template.
Docs
- Manage work pools
- Kubernetes work pools
- Customize job variables
If you can share:
- The relevant snippet of your pool’s base job template (job_configuration and variables), and
- One affected deployment’s job_variables,
I can point out exactly where the override is happening and suggest the minimal change.