Kate
08/20/2025, 12:53 PM
Marvin
08/20/2025, 12:53 PM
Marvin
08/20/2025, 12:57 PM
Option A — set the image on the work pool’s base job template
- Fetch the default template:
prefect work-pool get-default-base-job-template --type kubernetes --file base-template.json
- Edit base-template.json and set the default image under the variables section (e.g., variables.properties.image.default) or hardcode it in the job manifest (replace {{ image }}); see the excerpt after these steps.
- Create or overwrite the pool with that template:
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template base-template.json
# If the pool already exists, add --overwrite
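For orientation, the relevant portion of base-template.json looks roughly like this (abridged and rendered as YAML for readability; the file itself is JSON, and the image value is a placeholder):
variables:
  properties:
    image:
      type: string
      default: ghcr.io/you/your-image:tag   # default image for every run in this pool
job_configuration:
  job_manifest:
    spec:
      template:
        spec:
          containers:
            - image: "{{ image }}"          # or hardcode your image here instead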
Option B — set the image per deployment (recommended for flexibility)
- In Prefect 3.x, pass job_variables when deploying:
from prefect import flow

@flow
def my_flow():
    print("hello")

my_flow.from_source(
    source="https://github.com/you/your-repo",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="my-k8s-pool",
    job_variables={
        "image": "ghcr.io/you/your-image:tag",
        "image_pull_policy": "IfNotPresent",
        "namespace": "your-namespace",
        "service_account_name": "prefect-worker",
    },
)
Notes for flow-run images:
- The image must contain your flow’s runtime dependencies. If you’re using from_source, include git and any build tools your install requires.
- For private registries, add imagePullSecrets to your base job template’s pod spec and make sure the pod’s service account can pull the image.
- The worker’s image does not affect the image of your flow-run pods; that’s controlled by the work pool/deployment config above.
2) Use a pre-built image for the worker pod itself
- If you installed the worker via Helm, set the image in your values:
image:
  repository: ghcr.io/you/prefect-worker
  tag: 3.0.10
  pullPolicy: IfNotPresent
- Or, if you run your own Deployment, set it there:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-worker
  template:
    metadata:
      labels:
        app: prefect-worker
    spec:
      serviceAccountName: prefect-worker
      containers:
        - name: worker
          image: ghcr.io/you/prefect-worker:3.0.10
          args: ["prefect", "worker", "start", "-p", "my-k8s-pool"]
          env:
            - name: PREFECT_API_URL
              value: "http://prefect-server/api"
            # PREFECT_API_KEY if you use auth, etc.
Quick sanity checks
- Your work pool should be type `kubernetes`:
prefect work-pool ls
- You can preview the default K8s job template:
prefect work-pool get-default-base-job-template --type kubernetes
If you tell me whether you meant the flow-run pods or the worker pod, I’ll tailor this to your setup and provide a minimal, copy-pasteable config. I live to serve. Begrudgingly.
Kate
08/20/2025, 1:04 PM
Marvin
08/20/2025, 1:04 PM
Marvin
08/20/2025, 1:05 PM
Example prefect.yaml setting the image per deployment:
deployments:
- name: my-flow
  entrypoint: flows/my_flow.py:my_flow
  parameters: {}
  work_pool:
    name: my-k8s-pool  # must be a Kubernetes work pool
    job_variables:
      image: ghcr.io/you/your-image:tag
      image_pull_policy: IfNotPresent
      namespace: your-namespace
      service_account_name: prefect-worker
      env:
        PREFECT_LOGGING_LEVEL: INFO
  tags: ["prod"]
  schedules: []
Notes
- The image must include your flow’s runtime deps and Prefect.
- The worker image does not affect the pod image; the image set here controls your flow-run pods.
- For private registries, add imagePullSecrets at the work pool level (not per deployment); see the sketch after these notes. Fetch the base template, add it, then re-create/update the pool:
prefect work-pool get-default-base-job-template --type kubernetes --file base-template.json
# Edit base-template.json to add, e.g.:
# spec.template.spec.imagePullSecrets: [{ "name": "regcred" }]
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template base-template.json --overwrite
- If you prefer setting the image globally for all deployments in the pool, you can hardcode it in that same base job template (replace {{ image }} with your image).
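A rough sketch of that edit (abridged and rendered as YAML for readability; the file itself is JSON, and regcred is a placeholder for a docker-registry secret in the flow-run namespace):
job_configuration:
  job_manifest:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: regcred          # docker-registry secret the pods will pull with
          containers:
            - image: "{{ image }}"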
If you share your current YAML, I’ll tweak it to be exactly right. Otherwise, apply your deployments the way you do today and your runs should launch with the specified image. Try not to look surprised when it works.
Kate
08/20/2025, 1:40 PM
Marvin
08/20/2025, 1:40 PM
Marvin
08/20/2025, 1:41 PM
Option A — query the API’s version endpoint
curl -s "$PREFECT_API_URL/version"
# example output: {"version":"3.0.10"}
- If you don’t have external access, port-forward to the API service first:
kubectl -n <namespace> port-forward svc/<api-service-name> 4200:4200
curl -s http://localhost:4200/api/version
Option B — Read the API pod’s image tag (works even if the API is firewalled)
- The image tag usually matches the Prefect server version:
kubectl -n <namespace> get deploy \
  -l app.kubernetes.io/component=api \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].image}{"\n"}'
- If your labels differ, target the known deployment name instead:
kubectl -n <namespace> get deploy <your-api-deployment> \
-o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
Option C — Helm (if you installed with the chart)
helm -n <namespace> status <release-name>
# Look for App Version in the output
Option D — UI
- In the OSS UI, open the About/System info panel (usually in the footer or help menu). It shows the server and UI versions. Surprisingly civilized.
If you share your namespace and how you installed (Helm vs. custom manifests), I can give you the exact `kubectl`/`helm` command for your setup.
Kate
08/20/2025, 1:47 PM
Marvin
08/20/2025, 1:47 PM
Marvin
08/20/2025, 1:48 PM
Option 1 — Port-forward the API service and query it directly
- Find the API service:
kubectl -n <ns> get svc -l app.kubernetes.io/component=api
- Port-forward and curl the API:
kubectl -n <ns> port-forward svc/<api-svc-name> 4200:4200
curl -s http://localhost:4200/api/version
# => {"version":"3.x.y"}
Option 2 — From inside the API pod (bypasses ingress/path issues)
kubectl -n <ns> get pods -l app.kubernetes.io/component=api
kubectl -n <ns> exec -ti <api-pod> -- sh -lc 'prefect version || python - <<PY
import prefect; print(prefect.__version__)
PY'
Option 3 — If you must use your ingress URL
- Make sure it points to the API under /api. Then query:
curl -s https://your-hostname/api/version
If you hit https://your-hostname/version (no /api), you’ll get the UI’s version. That’s your mysterious 0.8.4.
Bonus sanity check — read the API image tag
kubectl -n <ns> get deploy -l app.kubernetes.io/component=api \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].image}{"\n"}'
The tag typically matches the server version.
If you share your namespace and service/ingress names, I’ll give you the exact commands for your setup.
Kate
08/22/2025, 10:50 AM
Marvin
08/22/2025, 10:50 AM
Marvin
08/22/2025, 10:53 AM
The pull section in deployment.yaml is not for container registries. It’s for pulling your code or other artifacts at run time (e.g., cloning a Git repo, pulling from cloud storage). Pulling a private container image is handled by your runtime/infrastructure, not by Prefect’s pull steps.
How to handle private registries (Prefect 3.x):
- Docker work pool: Make sure the machine running the Docker worker can pull from the registry (e.g., docker login or a configured credential store). In your deployment, just reference the image in job_variables.image. No pull step needed.
- Kubernetes work pool: Use Kubernetes imagePullSecrets or a ServiceAccount that references a docker-registry secret. Your work pool/job template (or job_variables) should include imagePullSecrets; see the sketch after this list. No Prefect pull step needed.
- AWS ECS work pool: Use an ECS task execution role (e.g., with AmazonEC2ContainerRegistryReadOnly) for ECR. For other registries, configure repository credentials in the task definition. No Prefect pull step needed.
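For the Kubernetes bullet above, a minimal sketch of the ServiceAccount approach (the names regcred, prefect-worker, and your-namespace are placeholders; create the docker-registry secret from your registry credentials first):
# Registry credentials the flow-run pods will use to pull the image
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: your-namespace
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config>
---
# ServiceAccount referenced via job_variables.service_account_name;
# pods running under it pull images with the secret above
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prefect-worker
  namespace: your-namespace
imagePullSecrets:
  - name: regcred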
When you would use a pull step:
- If your flow code isn’t baked into the image and you want the worker to fetch it at runtime (e.g., from a private Git repo), or you need to run setup commands like installing requirements (a sketch with a pip-install step follows the git example below).
Example pull step for a private Git repo:
deployments:
- name: my-deployment
  entrypoint: flows/my_flow.py:flow
  pull:
    - prefect.deployments.steps.git_clone:
        repository: https://github.com/org/repo.git
        access_token: "{{ prefect.blocks.secret.github_token }}"
        branch: main
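If you also need to install requirements at run time, a sketch combining git_clone with the pip_install_requirements utility step (the step id, repo URL, and requirements file name are placeholders):
pull:
  - prefect.deployments.steps.git_clone:
      id: clone-step                            # lets later steps reference the clone output
      repository: https://github.com/org/repo.git
      access_token: "{{ prefect.blocks.secret.github_token }}"
  - prefect.deployments.steps.pip_install_requirements:
      directory: "{{ clone-step.directory }}"   # directory returned by the clone step
      requirements_file: requirements.txt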
Helpful docs:
- Prefect YAML (pull steps): Prefect 3: prefect.yaml and steps
- Kubernetes worker/Helm and imagePullSecrets: Kubernetes worker examples
- ECS guide (execution role/ECR): Prefect AWS ECS guide
If you can share which work pool you’re using (Docker, Kubernetes, ECS) and which registry (ECR, GHCR, Docker Hub, etc.), I can give you the exact config to set.
Kate
08/22/2025, 2:19 PM
Marvin
08/22/2025, 2:19 PM
Marvin
08/22/2025, 2:22 PM
1) Plain environment variables via job_variables.env
- Set them in your deployment’s job_variables.env. These get injected into the runtime container/pod/task for each flow run.
Example `prefect.yaml`:
deployments:
- name: my-deployment
  entrypoint: flows/my_flow.py:flow
  work_pool:
    name: my-pool
    job_variables:
      env:
        LOG_LEVEL: "INFO"
        FEATURE_FLAG: "on"
Docs: Customize job variables
2) Secrets with Prefect’s Secret block (if you’re okay storing the secret in Prefect)
- Create a Secret block and reference it with Jinja in prefect.yaml. This avoids hardcoding values and keeps them encrypted at rest in Prefect.
Create/save the block:
from prefect.blocks.system import Secret
Secret(value="super-secret-value").save("my_api_key", overwrite=True)
Reference it in your deployment:
deployments:
- name: my-deployment
  ...
  work_pool:
    name: my-pool
    job_variables:
      env:
        API_KEY: "{{ prefect.blocks.secret.my_api_key }}"
Docs:
- Blocks overview: Blocks
- Storing secrets: Store secrets
- prefect.yaml templating: prefect.yaml
3) Use HashiCorp Vault directly (recommended if you already centralize secrets in Vault)
You’ve got two common patterns:
- Fetch at runtime in code with the Vault API (e.g., via the hvac client):
  - Supply only non-sensitive auth material to the runtime (e.g., Vault address and a short-lived token, or enable an auth method like Kubernetes/JWT/AppRole).
  - In your flow, fetch secrets from Vault just-in-time.
Example:
import os
import hvac
from prefect import flow, get_run_logger

@flow
def my_flow():
    logger = get_run_logger()
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],
        token=os.environ.get("VAULT_TOKEN"),  # or authenticate via approle/k8s/jwt
    )
    secret = client.secrets.kv.v2.read_secret_version(path="app/db")["data"]["data"]["password"]
    logger.info("Fetched secret from Vault")
    # use `secret` here...

if __name__ == "__main__":
    my_flow()
- Provide the minimal env needed to authenticate in your deployment:
job_variables:
  env:
    VAULT_ADDR: "https://vault.mycompany.com"
    VAULT_TOKEN: "{{ prefect.blocks.secret.vault_token }}"  # or use k8s/jwt/approle auth instead
hvac client: hvac on GitHub
- Inject via Vault Agent (sidecar or init) so your code reads from files/env variables:
  - Kubernetes: Use the Vault Agent Injector to add pod annotations that render secrets to a file volume (or envs). Reference those via your job template or job_variables. Your flow reads the file (e.g., /vault/secrets/app/db).
  - Docker/ECS: Run a Vault Agent sidecar or an init process to materialize secrets to a file, then read them at runtime.
For Kubernetes specifically, you’d typically:
- Add the Vault Injector annotations to the pod via your work pool’s base job template or job_variables (annotations), instructing Vault which secret paths/templating to use; a sketch of the annotations follows this list.
- (If using file rendering) mount the injected volume and read the file in your flow.
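A minimal sketch of those Injector annotations if you add them to the pod template inside your work pool’s base job template (rendered as YAML; the Vault role and secret path are placeholders for your setup):
job_configuration:
  job_manifest:
    spec:
      template:
        metadata:
          annotations:
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/role: "prefect-flows"                              # Vault k8s auth role bound to the pod's service account
            vault.hashicorp.com/agent-inject-secret-app-db: "secret/data/app/db"   # rendered to /vault/secrets/app-db in the pod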
Which approach should you use?
- Already standardized on Vault and want secrets to remain there: Use Vault Agent or fetch via code (hvac). This keeps Prefect unaware of secret values.
- Want the simplest setup inside Prefect: Use the Secret block and Jinja references in prefect.yaml.
If you can share:
- Which work pool you’re using (Docker, Kubernetes, ECS)
- How you prefer secrets to be exposed to your code (env vars vs files)
- Your Vault auth method (token, AppRole, Kubernetes/JWT, etc.)
Marvin
08/22/2025, 2:22 PM
Kate
08/22/2025, 2:39 PM
Marvin
08/22/2025, 2:39 PM
Marvin
08/22/2025, 2:42 PM
worker.extraEnvVarsSecret lets you inject environment variables into the worker Pod from a single, existing Kubernetes Secret. It affects the worker process only—not the flow-run Pods your worker launches.
Key points:
- Use worker.extraEnvVarsSecret (or worker.extraEnvVars / worker.envFrom) for variables the worker itself needs, like PREFECT_API_URL, PREFECT_API_KEY, or default Vault settings the worker may use.
- These do NOT automatically propagate to your flow-run Pods. To pass env vars to flow runs on Kubernetes, set them in your deployment’s job_variables.env or job_variables.envFrom (or via your work pool’s base job template).
How to use worker.extraEnvVarsSecret:
1) Create a Kubernetes Secret with your worker env vars:
kubectl create secret generic prefect-worker-env \
  --from-literal=PREFECT_API_URL=https://cloud.prefect.io/api/accounts/.../workspaces/... \
  --from-literal=PREFECT_API_KEY=xxxxx
2) Reference it in your Helm values:
worker:
  extraEnvVarsSecret: prefect-worker-env
Other related knobs in the chart:
- `worker.extraEnvVars`: inline env vars on the worker (a list of name/value entries)
worker:
  extraEnvVars:
    - name: PREFECT_LOGGING_LEVEL
      value: INFO
- `worker.envFrom`: import all keys from one or more Secrets/ConfigMaps
worker:
  envFrom:
    - secretRef:
        name: prefect-worker-env
    - configMapRef:
        name: some-config
Passing env vars to your flows (Kubernetes runs):
- Configure per-deployment in prefect.yaml with job_variables.env or `job_variables.envFrom`:
deployments:
- name: my-deployment
  work_pool:
    name: my-k8s-pool
    job_variables:
      env:
        LOG_LEVEL: "INFO"
      envFrom:
        - secretRef:
            name: my-app-secrets
- If you use Vault, either:
  - Inject via Vault Agent on the flow Pod (annotations/sidecar) and read from files/env in your code, or
  - Fetch from Vault at runtime in code (e.g., via hvac), passing only the minimal auth/env via job_variables.env.
Docs and references:
- Deploy a worker with Helm: Kubernetes worker with Helm
- Customize job variables (env/envFrom): Customize job variables
- Helm chart (values and examples): prefect-worker chart and values.yaml
If you tell me which variables should live on the worker vs the flow-run Pods (and your Vault auth method), I can give you exact values.yaml and prefect.yaml snippets.