# ask-marvin
k
@Marvin I have deployed Prefect server and a worker on Kubernetes. I need the worker to use a pre-built container image. How do I configure that? I'm on Prefect 3, OSS.
m
thought for 281.9 seconds
Just to confirm: do you want
- the image used for your flow runs (the pods the worker launches), or
- the image for the worker pod itself?
I’ll cover both, since Kubernetes is nothing if not options.

1) Use a pre-built image for flow runs (Kubernetes work pool)
- Create or update a Kubernetes work pool and set a default image in its base job template, or override per deployment.

Option A — set a default image at the work pool level
- Get the default template:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes --file base-template.json
- Edit `base-template.json` and set the default image under the variables section (e.g., `variables.properties.image.default`) or hardcode it in the job manifest (replace `{{ image }}`).
- Create or overwrite the pool with that template:
Copy code
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template base-template.json
# If the pool already exists, add --overwrite
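For orientation, here is an abridged sketch of what the edited `base-template.json` might look like after setting a default image. Only the relevant keys are shown (the rest of the generated template, including `job_configuration` and the other variables, stays as generated), and the image name is a placeholder:
Copy code
{
  "variables": {
    "type": "object",
    "properties": {
      "image": {
        "type": "string",
        "default": "ghcr.io/you/your-image:tag"
      }
    }
  }
}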
Option B — set the image per deployment (recommended for flexibility)
- In Prefect 3.x, pass `job_variables` when deploying:
Copy code
from prefect import flow

@flow
def my_flow():
    print("hello")

my_flow.from_source(
    source="<https://github.com/you/your-repo>",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="my-k8s-pool",
    job_variables={
        "image": "<http://ghcr.io/you/your-image:tag|ghcr.io/you/your-image:tag>",
        "image_pull_policy": "IfNotPresent",
        "namespace": "your-namespace",
        "service_account_name": "prefect-worker",
    },
)
Notes for flow-run images:
- The image must contain your flow’s runtime dependencies. If you’re using `from_source`, include git and any build tools your install requires.
- For private registries, add `imagePullSecrets` to your base job template’s pod spec and make sure your service account can pull the image.
- The worker’s image does not affect the image of your flow-run pods; that’s controlled by the work pool/deployment config above.

2) Use a pre-built image for the worker pod itself
- If you installed the worker via Helm, set the image in your values:
Copy code
image:
  repository: ghcr.io/you/prefect-worker
  tag: 3.0.10
  pullPolicy: IfNotPresent
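If it helps, a minimal sketch of applying those values with the PrefectHQ Helm chart; the release name and namespace are placeholders:
Copy code
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update
# "prefect-worker" (release) and "prefect" (namespace) are placeholders
helm upgrade --install prefect-worker prefect/prefect-worker \
  --namespace prefect --create-namespace \
  -f values.yaml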
- Or, if you run your own Deployment, set it there:
Copy code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-worker
  template:
    metadata:
      labels:
        app: prefect-worker
    spec:
      serviceAccountName: prefect-worker
      containers:
        - name: worker
          image: ghcr.io/you/prefect-worker:3.0.10
          args: ["prefect", "worker", "start", "-p", "my-k8s-pool"]
          env:
            - name: PREFECT_API_URL
              value: "<http://prefect-server/api>"
            # PREFECT_API_KEY if you use auth, etc.
Quick sanity checks
- Your work pool should be type `kubernetes`:
Copy code
prefect work-pool ls
- You can preview the default K8s job template:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes
If you tell me whether you meant the flow-run pods or the worker pod, I’ll tailor this to your setup and provide a minimal, copy-pasteable config. I live to serve. Begrudgingly.
k
@Marvin I meant the flow-run pods, and I use a YAML file to define my flow deployments
m
thought for 46.0 seconds
Great — flow-run pods it is. Since you’re using YAML deployments, set the image via the work pool’s job variables in your deployment file.

Minimal `deployment.yaml` example:
Copy code
deployments:
  - name: my-flow
    entrypoint: flows/my_flow.py:my_flow
    parameters: {}
    work_pool:
      name: my-k8s-pool          # must be a Kubernetes work pool
      job_variables:
        image: ghcr.io/you/your-image:tag
        image_pull_policy: IfNotPresent
        namespace: your-namespace
        service_account_name: prefect-worker
        env:
          PREFECT_LOGGING_LEVEL: INFO
    tags: ["prod"]
    schedules: []
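One way to apply a file like this from your project root (a sketch: the `--prefect-file` option is an assumption to verify with `prefect deploy --help` on your version; it is only needed if the file isn't the default `prefect.yaml`):
Copy code
# deploys every deployment defined in the file
prefect deploy --all --prefect-file deployment.yaml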
Notes
- The image must include your flow’s runtime deps and Prefect.
- The worker image does not affect the pod image; the `image` here controls your flow-run pods.
- For private registries, add `imagePullSecrets` at the work pool level (not per deployment). Fetch the base template, add it, then re-create/update the pool:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes --file base-template.json
# Edit base-template.json to add, e.g.:
# spec.template.spec.imagePullSecrets: [{ "name": "regcred" }]
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template base-template.json --overwrite
- If you prefer setting the image globally for all deployments in the pool, you can hardcode it in that same base job template (replace `{{ image }}` with your image).

If you share your current YAML, I’ll tweak it to be exactly right. Otherwise, apply your deployments the way you do today and your runs should launch with the specified image. Try not to look surprised when it works.
k
@Marvin How do I check the Prefect server version in kubernetes?
m
thought for 47.6 seconds
Got it — checking the version of the Prefect server (API) running in your cluster. Pick your poison:

Option A — Ask the API (most direct)
- If you know your API URL (it should point at the /api root), just curl the version endpoint:
Copy code
curl -s "$PREFECT_API_URL/version"
# example output: {"version":"3.0.10"}
- If you don’t have external access, port-forward to the API service first:
Copy code
kubectl -n <namespace> port-forward svc/<api-service-name> 4200:4200
curl -s http://localhost:4200/api/version
Option B — Read the API pod’s image tag (works even if the API is firewalled)
- The image tag usually matches the Prefect server version:
Copy code
kubectl -n <namespace> get deploy \
  -l app.kubernetes.io/component=api \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].image}{"\n"}'
- If your labels differ, target the known deployment name instead:
Copy code
kubectl -n <namespace> get deploy <your-api-deployment> \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
Option C — Helm (if you installed with the chart)
Copy code
helm -n <namespace> status <release-name>
# Look for App Version in the output
Option D — UI
- In the OSS UI, open the About/System info panel (usually in the footer or help menu). It shows the server and UI versions. Surprisingly civilized.

If you share your namespace and how you installed (Helm vs. custom manifests), I can give you the exact `kubectl`/`helm` command for your setup.
k
@Marvin The first option returns "0.8.4" which can't be true
m
thought for 44.4 seconds
“0.8.4” usually means you’re talking to the UI (or the wrong path), not the API. In Prefect 3, the API version lives at /api/version. If your PREFECT_API_URL doesn’t end with /api, you’ll hit the UI’s /version and see something like 0.8.x. Do this to be sure:

Option 1 — Port-forward directly to the API service and ask the right endpoint
- Find the API service:
Copy code
kubectl -n <ns> get svc -l app.kubernetes.io/component=api
- Port-forward and curl the API:
Copy code
kubectl -n <ns> port-forward svc/<api-svc-name> 4200:4200
curl -s http://localhost:4200/api/version
# => {"version":"3.x.y"}
Option 2 — From inside the API pod (bypasses ingress/path issues)
Copy code
kubectl -n <ns> get pods -l app.kubernetes.io/component=api
kubectl -n <ns> exec -ti <api-pod> -- sh -lc 'prefect version || python - <<PY
import prefect; print(prefect.__version__)
PY'
Option 3 — If you must use your ingress URL
- Make sure it points to the API under /api. Then query:
Copy code
curl -s https://your-hostname/api/version
If you hit https://your-hostname/version (no /api), you’ll get the UI’s version. That’s your mysterious 0.8.4.

Bonus sanity check — read the API image tag
Copy code
kubectl -n <ns> get deploy -l app.kubernetes.io/component=api \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].image}{"\n"}'
The tag typically matches the server version. If you share your namespace and service/ingress names, I’ll give you the exact commands for your setup.
k
@Marvin do I need to define the pull step in my deployment.yaml when pulling from a private registry?
m
thought for 158.5 seconds
Short answer: No. The `pull` section in `deployment.yaml` is not for container registries. It’s for pulling your code or other artifacts at run time (e.g., cloning a Git repo, pulling from cloud storage). Pulling a private container image is handled by your runtime/infrastructure, not by Prefect’s `pull` steps.

How to handle private registries (Prefect 3.x):
- Docker work pool: Make sure the machine running the Docker worker can pull from the registry (e.g., `docker login` or a configured credential store). In your deployment, just reference the image in `job_variables.image`. No `pull` step needed.
- Kubernetes work pool: Use Kubernetes `imagePullSecrets` or a ServiceAccount that references a docker-registry secret. Your work pool/job template (or `job_variables`) should include `imagePullSecrets`. No Prefect `pull` step needed (a sketch of creating such a secret follows this list).
- AWS ECS work pool: Use an ECS task execution role (e.g., with `AmazonEC2ContainerRegistryReadOnly`) for ECR. For other registries, configure repository credentials in the task definition. No Prefect `pull` step needed.
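For the Kubernetes case, a minimal sketch of creating a registry secret and making it available to flow-run pods; the secret name `regcred`, the namespace, the service account, the registry, and the token variable are all placeholders:
Copy code
# Create a docker-registry secret in the namespace where flow-run pods launch
kubectl -n your-namespace create secret docker-registry regcred \
  --docker-server=ghcr.io \
  --docker-username=you \
  --docker-password="$REGISTRY_TOKEN"

# Then either reference it via imagePullSecrets in the work pool's base job template,
# or attach it to the service account the flow-run pods use:
kubectl -n your-namespace patch serviceaccount prefect-worker \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'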
When you would use a `pull` step:
- If your flow code isn’t baked into the image and you want the worker to fetch it at runtime (e.g., from a private Git repo), or you need to run setup commands like installing requirements.

Example `pull` section for a private Git repo:
Copy code
deployments:
  - name: my-deployment
    entrypoint: flows/my_flow.py:flow
    pull:
      - prefect.deployments.steps.git_clone:
          repository: https://github.com/org/repo.git
          access_token: "{{ prefect.blocks.secret.github_token }}"
          branch: main
Helpful docs:
- Prefect YAML (pull steps): Prefect 3: prefect.yaml and steps
- Kubernetes worker/Helm and imagePullSecrets: Kubernetes worker examples
- ECS guide (execution role/ECR): Prefect AWS ECS guide

If you can share which work pool you’re using (Docker, Kubernetes, ECS) and which registry (ECR, GHCR, Docker Hub, etc.), I can give you the exact config to set.
k
@Marvin thanks for the help, my flows are deployed. But they need some env variables: some are just simple env vars and some are secrets managed by HashiCorp Vault. How do I provide them?
m
thought for 194.7 seconds
Great question. In Prefect 3.x you have a few options, depending on whether you want to manage values directly in Prefect or keep everything sourced from Vault.

1) Simple environment variables via deployment
- Add them to your deployment’s `job_variables.env`. These get injected into the runtime container/pod/task for each flow run.

Example `prefect.yaml`:
Copy code
deployments:
  - name: my-deployment
    entrypoint: flows/my_flow.py:flow
    work_pool: { name: my-pool }
    job_variables:
      env:
        LOG_LEVEL: "INFO"
        FEATURE_FLAG: "on"
Docs: Customize job variables

2) Secrets with Prefect’s Secret block (if you’re okay storing the secret in Prefect)
- Create a Secret block and reference it with Jinja in `prefect.yaml`. This avoids hardcoding values and keeps them encrypted at rest in Prefect.

Create/save the block:
Copy code
from prefect.blocks.system import Secret
Secret(value="super-secret-value").save("my_api_key", overwrite=True)
Reference it in your deployment:
Copy code
deployments:
  - name: my-deployment
    ...
    job_variables:
      env:
        API_KEY: "{{ prefect.blocks.secret.my_api_key }}"
Docs:
- Blocks overview: Blocks
- Storing secrets: Store secrets
- prefect.yaml templating: prefect.yaml

3) Use HashiCorp Vault directly (recommended if you already centralize secrets in Vault)
You’ve got two common patterns:

- Fetch at runtime in code with the Vault API (e.g., via the hvac client)
  - Supply only non-sensitive auth material to the runtime (e.g., Vault address and a short-lived token, or enable an auth method like Kubernetes/JWT/AppRole).
  - In your flow, fetch secrets from Vault just-in-time. Example:
Copy code
import os
import hvac
from prefect import flow, get_run_logger

@flow
def my_flow():
  logger = get_run_logger()
  client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ.get("VAULT_TOKEN"),  # or authenticate via approle/k8s/jwt
  )
  secret = client.secrets.kv.v2.read_secret_version(path="app/db")["data"]["data"]["password"]
  logger.info("Fetched secret from Vault")
  # use `secret` here...

if __name__ == "__main__":
  my_flow()
- Provide the minimal env needed to authenticate in your deployment:
Copy code
job_variables:
  env:
    VAULT_ADDR: "<https://vault.mycompany.com>"
    VAULT_TOKEN: "{{ prefect.blocks.secret.vault_token }}"  # or use k8s/jwt/approle auth instead
hvac client: hvac on GitHub

- Inject via Vault Agent (sidecar or init) so your code reads from files/env variables
  - Kubernetes: Use the Vault Agent Injector to add pod annotations that render secrets to a file volume (or envs). Reference those via your job template or `job_variables`. Your flow reads the file (e.g., `/vault/secrets/app/db`).
  - Docker/ECS: Run a Vault Agent sidecar or an init process to materialize secrets to a file, then read them at runtime.

For Kubernetes specifically, you’d typically:
- Add the Vault Injector annotations to the pod via your work pool’s base job template or `job_variables` (annotations), instructing Vault which secret paths/templating to use (see the annotation sketch after this list).
- (If using file rendering) mount the injected volume and read the file in your flow.
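For the injector route, these are the standard Vault Agent Injector pod annotations you’d need on the flow-run pod; the role name and secret path are placeholders, and how you attach annotations depends on your base job template:
Copy code
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "prefect-flows"                          # placeholder Vault role
vault.hashicorp.com/agent-inject-secret-db: "secret/data/app/db"   # placeholder KV v2 path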
Which approach should you use?
- Already standardized on Vault and want secrets to remain there: Use Vault Agent or fetch via code (hvac). This keeps Prefect unaware of secret values.
- Want the simplest setup inside Prefect: Use the Secret block and Jinja references in `prefect.yaml`.

If you can share:
- Which work pool you’re using (Docker, Kubernetes, ECS)
- How you prefer secrets to be exposed to your code (env vars vs files)
- Your Vault auth method (token, AppRole, Kubernetes/JWT, etc.)
…I can provide concrete YAML/job template snippets for your setup.
k
@Marvin what about worker.extraEnvVarsSecret in prefect worker helm chart?
m
thought for 199.8 seconds
Great callout. In the Prefect worker Helm chart, `worker.extraEnvVarsSecret` lets you inject environment variables into the worker Pod from a single, existing Kubernetes Secret. It affects the worker process only—not the flow-run Pods your worker launches.

Key points:
- Use `worker.extraEnvVarsSecret` (or `worker.extraEnvVars` / `worker.envFrom`) for variables the worker itself needs, like `PREFECT_API_URL`, `PREFECT_API_KEY`, or default Vault settings the worker may use.
- These do NOT automatically propagate to your flow-run Pods. To pass env vars to flow runs on Kubernetes, set them in your deployment’s `job_variables.env` or `job_variables.envFrom` (or via your work pool’s base job template).

How to use worker.extraEnvVarsSecret:
1) Create a Kubernetes Secret with your worker env vars:
Copy code
kubectl create secret generic prefect-worker-env \
  --from-literal=PREFECT_API_URL=https://cloud.prefect.io/api/accounts/.../workspaces/... \
  --from-literal=PREFECT_API_KEY=xxxxx
2) Reference it in your Helm values:
Copy code
worker:
  extraEnvVarsSecret: prefect-worker-env
Other related knobs in the chart:
- `worker.extraEnvVars`: inline key/value env vars on the worker
Copy code
worker:
  extraEnvVars:
    - name: PREFECT_LOGGING_LEVEL
      value: "INFO"
- `worker.envFrom`: import all keys from one or more Secrets/ConfigMaps
Copy code
worker:
  envFrom:
    - secretRef:
        name: prefect-worker-env
    - configMapRef:
        name: some-config
Passing env vars to your flows (Kubernetes runs):
- Configure per-deployment in `prefect.yaml` with `job_variables.env` or `job_variables.envFrom`:
Copy code
deployments:
  - name: my-deployment
    work_pool: { name: my-k8s-pool }
    job_variables:
      env:
        LOG_LEVEL: "INFO"
      envFrom:
        - secretRef:
            name: my-app-secrets
- If you use Vault, either:
  - Inject via Vault Agent on the flow Pod (annotations/sidecar) and read from files/env in your code, or
  - Fetch from Vault at runtime in code (e.g., via hvac), passing only the minimal auth/env via `job_variables.env`.

Docs and references:
- Deploy a worker with Helm: Kubernetes worker with Helm
- Customize job variables (env/envFrom): Customize job variables
- Helm chart (values and examples): prefect-worker chart and values.yaml

If you tell me which variables should live on the worker vs the flow-run Pods (and your Vault auth method), I can give you exact values.yaml and `prefect.yaml` snippets.