Yu Shao Pang
09/08/2025, 5:13 AM

Marvin
09/08/2025, 5:13 AM

Marvin
09/08/2025, 5:19 AM
1) Export the default Kubernetes base job template:
prefect work-pool get-default-base-job-template --type kubernetes > k8s_base.json
2) Edit k8s_base.json to add the secret mounts
Find the container spec at:
job_configuration.job_manifest.spec.template.spec.containers[0]
Add whichever secret pattern you want (a JSON validation sketch follows the patterns):
- All keys as env vars via envFrom
"envFrom": [
  { "secretRef": { "name": "my-app-secrets" } }
]
- Specific keys as env vars via valueFrom.secretKeyRef (replace or supplement the default "env": "{{ env }}")
"env": [
  {
    "name": "DB_USER",
    "valueFrom": { "secretKeyRef": { "name": "db-secret", "key": "username" } }
  },
  {
    "name": "DB_PASS",
    "valueFrom": { "secretKeyRef": { "name": "db-secret", "key": "password" } }
  }
]
(Non-secret envs can either stay in the deployment-level "env" and be merged in, or be listed here as explicit entries.)
- Mount secrets as files via volumes + volumeMounts
Add to the pod spec (same level as containers) and to the container:
"volumes": [
  {
    "name": "creds",
    "secret": { "secretName": "my-app-secrets" }
  }
],
"containers": [
  {
    "name": "prefect-job",
    "image": "{{ image }}",
    "args": "{{ command }}",
    "env": "{{ env }}",
    "imagePullPolicy": "{{ image_pull_policy }}",
    "volumeMounts": [
      { "name": "creds", "mountPath": "/var/run/creds", "readOnly": true }
    ]
  }
]
- Combine multiple secrets via a projected volume
"volumes": [
  {
    "name": "combo-secrets",
    "projected": {
      "sources": [
        { "secret": { "name": "db-secret" } },
        { "secret": { "name": "api-secret" } }
      ]
    }
  }
],
"containers": [
  {
    "name": "prefect-job",
    "image": "{{ image }}",
    "args": "{{ command }}",
    "env": "{{ env }}",
    "imagePullPolicy": "{{ image_pull_policy }}",
    "volumeMounts": [
      { "name": "combo-secrets", "mountPath": "/var/run/secrets", "readOnly": true }
    ]
  }
]
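Because the template is hand-edited JSON, it is easy to break the file with a stray comma. A quick validation sketch before uploading (assumes jq is installed; nothing here is Prefect-specific):
```
# Fail fast if the edited template is no longer valid JSON
jq empty k8s_base.json && echo "k8s_base.json parses cleanly"
```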
Notes:
- Keep "namespace": "{{ namespace }}" so your jobs run in the same namespace where your static secrets live.
- The default template includes "env": "{{ env }}". That placeholder renders envs from your deployment job_variables. If you also need secretKeyRef envs, either:
  - Replace "env" with a fully explicit list (as shown), or
  - Use envFrom to load the whole secret as env vars and keep "{{ env }}" for your non-secret envs defined per-deployment.
3) Update your work pool with the edited template
prefect work-pool update "<your-pool-name>" --base-job-template k8s_base.json
4) Continue deploying as usual
- Your CI can keep setting image via prefect.yaml's work_pool.job_variables.image (or via your deployment code). The template handles the rest.
- Secrets must already exist in the namespace; Prefect will not create them (see the sketch below).
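For completeness, a sketch of pre-creating those secrets with kubectl; the secret names (my-app-secrets, db-secret) and keys come from the examples above, and the literal values are placeholders:
```
# Create the secrets the job manifest references (values are placeholders)
kubectl -n <your-namespace> create secret generic my-app-secrets \
  --from-literal=API_TOKEN=changeme
kubectl -n <your-namespace> create secret generic db-secret \
  --from-literal=username=dbuser \
  --from-literal=password=changeme

# Confirm they exist in the namespace the jobs will run in
kubectl -n <your-namespace> get secret my-app-secrets db-secret
```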
Per-deployment vs. per-pool
- If all jobs in a pool need the same secret mounts, put them in the pool base job template (recommended).
- If only some deployments need them, consider:
  - Creating a separate pool with a different base template, or
  - Exposing additional Jinja variables in your template (advanced) and passing them via deployment job_variables.

Marvin
09/08/2025, 5:19 AM
- Get default template: prefect work-pool get-default-base-job-template --type kubernetes
- Update pool with template: prefect work-pool update "<your-pool-name>" --base-job-template k8s_base.json
Docs
- Kubernetes worker guide
- Customize job variables
- Workers concept
Quick question to tailor the example: do you want the secrets as environment variables, as files, or a mix of both? And do you need this at the pool level for all jobs or only for a particular deployment?

Yu Shao Pang
09/08/2025, 5:21 AM

Marvin
09/08/2025, 5:21 AM

Marvin
09/08/2025, 5:22 AM
1) Expose the secret settings as placeholders in a custom base job template (base_job_template.json). Keep the placeholders quoted so the file stays valid JSON; the worker substitutes the typed values when it renders the manifest:
```
{
  "job_configuration": {
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "labels": "{{ labels }}",
    "env": "{{ env }}",
    "command": "{{ command }}",
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-",
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}"
      },
      "spec": {
        "backoffLimit": 0,
        "template": {
          "spec": {
            "restartPolicy": "Never",
            "serviceAccountName": "{{ service_account_name }}",
            "volumes": "{{ volumes }}",
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "args": "{{ command }}",
                "env": "{{ container_env }}",
                "envFrom": "{{ env_from }}",
                "volumeMounts": "{{ volume_mounts }}"
              }
            ]
          }
        },
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      }
    },
    "cluster_config": "{{ cluster_config }}",
    "stream_output": "{{ stream_output }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
  },
  "variables": {
    "properties": {
      "name": { "type": ["string", "null"], "default": null },
      "namespace": { "type": ["string", "null"], "default": null },
      "labels": { "type": ["object", "null"], "default": null, "additionalProperties": { "type": "string" } },
      "image": { "type": ["string", "null"] },
      "image_pull_policy": { "type": ["string", "null"], "default": "IfNotPresent" },
      "service_account_name": { "type": ["string", "null"] },
      "finished_job_ttl": { "type": ["integer", "string", "null"] },
      "cluster_config": { "type": ["object", "null"] },
      "stream_output": { "type": ["boolean", "null"] },
      "job_watch_timeout_seconds": { "type": ["integer", "null"] },
      "pod_watch_timeout_seconds": { "type": ["integer", "null"] },
      "env": { "type": ["object", "null"], "default": null, "additionalProperties": { "type": ["string", "null"] } },
      "container_env": { "type": ["array", "null"], "default": [], "items": { "type": "object" } },
      "env_from": { "type": ["array", "null"], "default": [], "items": { "type": "object" } },
      "volumes": { "type": ["array", "null"], "default": [], "items": { "type": "object" } },
      "volume_mounts": { "type": ["array", "null"], "default": [], "items": { "type": "object" } },
      "command": { "type": ["array", "null"], "items": { "type": "string" } }
    }
  }
}
```
2) Apply it automatically:
- CI step:
prefect work-pool update "<your-pool>" --base-job-template base_job_template.json
- Or a Helm post-install/upgrade hook Job (pattern):
  - Mount the JSON as a ConfigMap
  - Run a Job using an image with the Prefect CLI and your Prefect API credentials
  - Command: prefect work-pool update "<your-pool>" --base-job-template <path-to-mounted-json>

Marvin
09/08/2025, 5:22 AM
This keeps the pool template in sync on every Helm release.
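Either way the template lands, you can confirm the pool actually carries it; a quick check with the standard CLI (pool name is a placeholder):
```
# Inspect the pool; the output includes the active base job template
prefect work-pool inspect "<your-pool>"
```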
Option B: Drive values per-deployment in prefect.yaml
Once your base template exposes placeholders like env_from, volumes, volume_mounts, you can set them per-deployment in prefect.yaml:
```
deployments:
  - name: my-flow
    entrypoint: flows/my_flow.py:flow
    work_pool:
      name: <your-pool>
      job_variables:
        image: "registry/gitlab/my-image:{{ git_sha }}"
        namespace: "my-namespace"
        # Option 1: all keys from a secret as env
        env_from:
          - secretRef:
              name: my-app-secrets
        # Option 2: specific key envs
        container_env:
          - name: DB_USER
            valueFrom:
              secretKeyRef:
                name: db-secret
                key: username
          - name: DB_PASS
            valueFrom:
              secretKeyRef:
                name: db-secret
                key: password
        # Files via volumes + mounts
        volumes:
          - name: creds
            secret:
              secretName: my-app-secrets
        volume_mounts:
          - name: creds
            mountPath: /var/run/creds
            readOnly: true
        # Or projected multiple secrets
        # volumes:
        #   - name: combo
        #     projected:
        #       sources:
        #         - secret: { name: db-secret }
        #         - secret: { name: api-secret }
        # volume_mounts:
        #   - name: combo
        #     mountPath: /var/run/secrets
        #     readOnly: true
```
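After changing prefect.yaml, re-registering the deployment picks up the new job_variables; a minimal sketch using the deployment name from the example above:
```
# Re-run from the repo root containing prefect.yaml
prefect deploy --name my-flow
```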
Notes and gotchas
- You only need to set the base job template once (or keep it updated via CI/Helm). After that, it’s declarative and parameterized via prefect.yaml job_variables.
- Keys in the pod spec are standard Kubernetes (camelCase) because you’re literally supplying a Kubernetes Job manifest in the template.
- The variables section in your base template is important: it declares and validates which job_variables are allowed. Add new properties there when you introduce new placeholders (env_from, volumes, etc.).
- If all deployments use the same mounts, set static values directly in the base template instead of exposing placeholders.
- Secrets must already exist in the target namespace; Prefect won’t create them.
CLI you’ll use (verified)
- Get default template to start from:
prefect work-pool get-default-base-job-template --type kubernetes > k8s_base.json
- Update pool with your custom template:
prefect work-pool update "<your-pool>" --base-job-template base_job_template.json
Docs
- Kubernetes worker guide
- Customize job variables (how templating works)
- Workers concept
If you share whether you prefer CI or Helm-based bootstrapping, I can provide a concrete Helm Job manifest or CI step example wired to your GitLab pipeline.

Yu Shao Pang
09/08/2025, 6:03 AM

Marvin
09/08/2025, 6:03 AM

Marvin
09/08/2025, 6:04 AM
Step 1: Export the default base job template
prefect work-pool get-default-base-job-template --type kubernetes > base_job_template.json
Open base_job_template.json and insert your secret mounts (a jq sketch for scripting this follows the list):
- Under job_configuration.job_manifest.spec.template.spec, add a volumes entry for your secret
- Under ...containers[0], add volumeMounts (and envFrom if you want all keys as env vars)
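If you'd rather script the edit than do it by hand, a jq sketch that injects the volume, mount, and envFrom blocks (secret name from the examples; assumes jq is installed):
```
# Inject secret mounts into the exported template without hand-editing
jq '.job_configuration.job_manifest.spec.template.spec.volumes = [{"name":"creds","secret":{"secretName":"my-app-secrets"}}]
  | .job_configuration.job_manifest.spec.template.spec.containers[0].volumeMounts = [{"name":"creds","mountPath":"/var/run/creds","readOnly":true}]
  | .job_configuration.job_manifest.spec.template.spec.containers[0].envFrom = [{"secretRef":{"name":"my-app-secrets"}}]' \
  base_job_template.json > base_job_template.edited.json
```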
Example: add these blocks to the default template
```
...
"spec": {
  "backoffLimit": 0,
  "template": {
    "spec": {
      "completions": 1,
      "volumes": [
        {
          "name": "creds",
          "secret": {
            "secretName": "my-app-secrets"
          }
        }
      ],
      "containers": [
        {
          "args": "{{ command }}",
          "env": "{{ env }}",
          "envFrom": [
            { "secretRef": { "name": "my-app-secrets" } }
          ],
          "image": "{{ image }}",
          "imagePullPolicy": "{{ image_pull_policy }}",
          "name": "prefect-job",
          "volumeMounts": [
            { "name": "creds", "mountPath": "/var/run/creds", "readOnly": true }
          ]
        }
      ],
      "parallelism": 1,
      "restartPolicy": "Never",
      "serviceAccountName": "{{ service_account_name }}"
    }
  },
  "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
}
...
```
Notes:
- Keep namespace: "{{ namespace }}" so the job runs in the same namespace where your static secrets exist.
- You can remove envFrom if you only want file mounts; keep one or both as needed.
- Start from the default JSON and only insert the shown blocks; this preserves the default variables schema (you can verify that as shown below).
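To double-check that only the manifest changed and the variables schema is still the default, a diff sketch (assumes jq is available):
```
# Compare your edited template's variables schema against the default
diff \
  <(prefect work-pool get-default-base-job-template --type kubernetes | jq -S '.variables') \
  <(jq -S '.variables' base_job_template.json) \
  && echo "variables schema unchanged"
```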
Step 2: Add the template to your Helm chart
Put the edited file at charts/your-chart/files/base_job_template.json.
Create a ConfigMap that ships it:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "yourchart.fullname" . }}-pool-template
data:
  base_job_template.json: |
{{ .Files.Get "files/base_job_template.json" | indent 4 }}
```
Step 3: Bootstrap/update the work pool via a Helm hook Job
This Job runs the Prefect CLI to create/update the pool with your template whenever you install/upgrade.
```
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "yourchart.fullname" . }}-pool-bootstrap
  annotations:
    # post-hooks so the ConfigMap above already exists when this Job mounts it
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.prefect.bootstrap.serviceAccountName | default "default" }}
      restartPolicy: Never
      containers:
        - name: apply-template
          image: prefecthq/prefect:3-latest
          imagePullPolicy: IfNotPresent
          # Apply the mounted template to the pool (the pool must already exist for `update`)
          command:
            - prefect
            - work-pool
            - update
            - {{ .Values.prefect.workPoolName | quote }}
            - --base-job-template
            - /templates/base_job_template.json
          env:
            # For Prefect Cloud: set PREFECT_API_URL to the workspace-scoped URL and provide PREFECT_API_KEY
            # Example:
            # - name: PREFECT_API_URL
            #   value: "https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>"
            # - name: PREFECT_API_KEY
            #   valueFrom:
            #     secretKeyRef:
            #       name: {{ .Values.prefect.bootstrap.credentialsSecret }}
            #       key: PREFECT_API_KEY
            #
            # For Prefect Server (self-hosted): only PREFECT_API_URL is needed, no key.
            - name: PREFECT_API_URL
              value: {{ .Values.prefect.apiUrl | quote }}
          # Completion sketch: mount the ConfigMap'd template; path and volume name are arbitrary
          volumeMounts:
            - name: pool-template
              mountPath: /templates
              readOnly: true
      volumes:
        - name: pool-template
          configMap:
            name: {{ include "yourchart.fullname" . }}-pool-template
```

Marvin
09/08/2025, 6:04 AM
Example values:
```
prefect:
  workPoolName: "k8s-pool-with-secrets"
  # Cloud: set workspace-scoped URL and provide a Secret with PREFECT_API_KEY
  # apiUrl: "https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>"
  # bootstrap:
  #   credentialsSecret: "prefect-api-credentials"
  #
  # Server:
  apiUrl: "http://prefect-server.api.svc.cluster.local/api"
  bootstrap:
    serviceAccountName: "default"
```
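With the ConfigMap, hook Job, and values wired up, a normal release applies everything; the release name and chart path below are placeholders:
```
# The hook Job re-applies the pool template on every install/upgrade
helm upgrade --install my-release ./charts/your-chart -f values.yaml
```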
Step 4: Ensure your worker uses this pool
Your existing worker (deployed via Helm) should reference the same workPoolName. New jobs will pick up the updated template automatically; no worker restart is needed (see the example below).
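For reference, the worker only needs to target the same pool, e.g. (pool name from the example values above):
```
# New flow runs submitted to this pool use the updated job template
prefect worker start --pool "k8s-pool-with-secrets"
```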
That’s it—this keeps everything declarative:
- Secrets are hardcoded in the template (volume + envFrom)
- The template is versioned with your chart
- Helm applies it on every install/upgrade so it never drifts
Docs for reference
- Kubernetes worker
- Customize job variables
- Workers concept
If you share your pool name and whether you’re on Prefect Cloud or Server, I can tailor the Job’s env and provide a ready-to-paste chart snippet with the correct API URL and credential handling.