# ask-marvin
@Marvin What default variables are set for work pool job templates?
thought for 641.9 seconds
Assuming Prefect 3.x: work pool base job templates are rendered with a small, consistent Jinja context. You can reference the following default variables in fields like `command`, `env`, etc.:

- `flow_run`: metadata for the current flow run (id, name, tags, parameters, scheduled/expected start time, run_count, parent/root ids, etc.)
- `deployment`: metadata for the deployment (id, name, tags, version, entrypoint/source info, etc.)
- `flow`: metadata for the flow (id, name, version, tags)
- `parameters`: a convenience alias for `flow_run.parameters`
- `scheduled_start_time`: a convenience alias for the run's scheduled/expected start time (ISO timestamp)
- `log_level`: the resolved log level for the run

Notes:
- There is no default "results" variable in job templates.
- These keys are available across all worker types; some integrations may add worker-specific variables, but the above are the core defaults.

Example usage in a `command` field:
```shell
uv run --with "prefect" python -m prefect.engine \
  --flow-run-id "{{ flow_run.id }}" \
  --log-level "{{ log_level }}" \
  --start-time "{{ scheduled_start_time }}"
```
Good references:
- Manage work pools and job templates
- Developing a custom worker
- Runtime `flow_run` docs (shape of the flow run object you'll see in templates)

If you're looking for a specific field (e.g., a particular deployment attribute), just let me know and I can confirm whether it's present in the template context and show an example.
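As a quick illustration of how a worker fills in these placeholders, here is a small stdlib-only Python sketch that substitutes the built-in context into a `command` template. Real workers use Jinja for rendering; the variable names are the defaults listed above, and the values are made up:

```python
import re

# Made-up example values; a real worker builds this context from the flow run.
context = {
    "flow_run.id": "5e9d9f0e-5d3b-4a1c-9a1e-000000000000",
    "log_level": "INFO",
    "scheduled_start_time": "2024-01-01T00:00:00+00:00",
}

command_template = (
    'python -m prefect.engine '
    '--flow-run-id "{{ flow_run.id }}" '
    '--log-level "{{ log_level }}"'
)

def render(template: str, ctx: dict) -> str:
    # Replace each {{ dotted.name }} with its value from the context,
    # leaving unknown placeholders untouched.
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(ctx.get(m.group(1), m.group(0))),
        template,
    )

print(render(command_template, context))
```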
@Marvin I'm getting this error: Value error, The variables specified in the job configuration template must be present as properties in the variables schema. Your job configuration uses the following undeclared variable(s): flow_run.name, deployment.name.
thought for 98.7 seconds
Thanks for sharing the error. That message usually means the Jinja placeholders you're using are being validated against the work pool's "job variables" schema, and Prefect is treating `flow_run.name` and `deployment.name` as variables that must be declared there.

A couple of key points:
- Built-in runtime variables like `flow_run`, `deployment`, `flow`, `parameters`, `scheduled_start_time`, and `log_level` are available when rendering the base job template. You can reference them directly in `job_configuration` fields (e.g., `command`, `env`), and they do not need to be declared in your variables schema.
- The variables schema only needs to declare user-defined variables (e.g., `image`, `cpu`, `memory`, `extra_args`). If you reference a variable like `{{ my_var }}` in the job configuration, it must be declared in the variables schema.

Why you're seeing this error:
- Most often this happens if `{{ flow_run.name }}` or `{{ deployment.name }}` is used inside the variables schema defaults, or inside a user-defined variable's default value. Those fields are validated as "variables," so the validator expects every placeholder to be a property you defined in the schema; built-ins are not allowed there.
- It can also happen if you've accidentally put placeholders under a `variables.` path in the job config.

How to fix:
- Use built-ins only in the `job_configuration` section, not in variable defaults. For example:
```yaml
variables:
  type: object
  properties:
    env:
      type: object
      default: {}
    extra_args:
      type: array
      items: { type: string }
      default: []
job_configuration:
  command: >
    uv run --with "prefect" python -m prefect.engine
    --flow-run-id "{{ flow_run.id }}"
    --log-level "{{ log_level }}"
  env: "{{ env }}"
```
- If you actually want a user-defined variable, declare it and use that name in the template:
```yaml
variables:
  type: object
  properties:
    run_name:
      type: string
      default: ""
job_configuration:
  # If you want the Prefect run name, don't make it a user variable; just use the built-in:
  # command: "... {{ flow_run.name }} ..."
  # If you want to pass a custom value, reference the user variable:
  env:
    RUN_NAME: "{{ run_name }}"
```
If you paste your current base job template (both the `variables` section and the `job_configuration` section), I can point to the exact line causing the validation failure and suggest the minimal change.

References:
- Manage work pools and job templates: https://docs-3.prefect.io/v3/how-to-guides/deployment_infra/manage-work-pools
- Developing a custom worker: https://docs-3.prefect.io/v3/advanced/developing-a-custom-worker
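To make the failure mode concrete, here is a rough, stdlib-only Python sketch of the kind of check behind that validation error. This is illustrative only, not Prefect's actual implementation: it scans a string for `{{ ... }}` placeholders and reports any whose top-level name is not among the declared schema properties, which is exactly what happens when built-ins like `flow_run.name` land in a variables-schema default:

```python
import re

def undeclared_placeholders(template_text: str, declared: set) -> list:
    """Return {{ ... }} placeholders whose top-level name is not declared.

    A simplified stand-in for the validator that produced the error above.
    """
    found = []
    for match in re.finditer(r"\{\{\s*([\w.]+)\s*\}\}", template_text):
        name = match.group(1)
        # Only the top-level name (before the first dot) is checked.
        if name.split(".")[0] not in declared:
            found.append(name)
    return found

# A variables-schema default that (incorrectly) uses built-ins:
bad_default = "run for {{ flow_run.name }} / {{ deployment.name }}"

# Only user-defined properties count as declared here:
declared = {"env", "extra_args", "run_name"}

print(undeclared_placeholders(bad_default, declared))
# Flags flow_run.name and deployment.name, mirroring the error message.
```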