# marvin-ai
r
@Marvin I am using a Prefect Kubernetes worker, but the official container image seems to carry many known vulnerabilities (probably inherited from the base image used). Is there a way to build a custom container image from my own Python base image? Has somebody already done that in the past? (from the forum history or something maybe?)
m
thought for 443.6 seconds
Short answer: yes. You can (and many users do) run both the Kubernetes worker and your flow-run jobs from custom images built on your organization’s Python base.

In Prefect 3.x you have two images to think about:
- Worker image: the long-lived pod that polls your work pool
- Flow-run image: the ephemeral Kubernetes Job pod that actually runs your flow

Below are concrete ways to do both, plus links.

1) Build your own image(s)
- For the Kubernetes worker (needs Prefect + the Kubernetes client)
Copy code
# Use your org’s base Python image
FROM registry.example.com/python:3.11-slim

ENV PIP_NO_CACHE_DIR=1 PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1

# Create non-root user
RUN addgroup --system prefect && adduser --system --ingroup prefect --uid 1001 prefect

# Install Prefect and the k8s extra
RUN pip install "prefect[kubernetes]==3.*"

USER prefect
WORKDIR /app

# (Optional) copy helper scripts/configs if you have them
# COPY ./worker/ /app/

# Default command can be the worker; you’ll usually override args via your Deployment/Helm
CMD ["prefect", "worker", "start", "-p", "my-k8s-pool"]
- For flow-run jobs (needs Prefect + your code deps; does not need the k8s client)
Copy code
FROM registry.example.com/python:3.11-slim

ENV PIP_NO_CACHE_DIR=1 PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1

RUN addgroup --system app && adduser --system --ingroup app --uid 1001 app
USER app
WORKDIR /app

# Install your app deps + Prefect runtime
# Copy whichever dependency manifests you use
# (Dockerfiles don’t support trailing comments on instruction lines)
COPY pyproject.toml uv.lock* requirements*.txt ./
# Example with pip:
RUN pip install --no-cache-dir -r requirements.txt && pip install "prefect==3.*"

# Add your project code
COPY . /app
Security tips to reduce CVEs:
- Start from your vetted corporate base (often already patched/scanned).
- Prefer slim/distroless bases where possible; run as non-root; minimize OS packages.
- Pin dependency versions and rebuild regularly to pick up upstream patches.

2) Run the worker with your image
- If you use Kubernetes manifests or Helm, just point to your image. With the Prefect Helm chart you can override the worker image via values.
- The worker only needs:
  - Your image
  - Prefect API connection details (Cloud: `PREFECT_API_KEY` and `PREFECT_API_URL`; OSS: `PREFECT_API_URL`)
  - The command `prefect worker start -p <your-work-pool>`
- Docs: Kubernetes worker and work pool guide
- Helm chart: prefect-helm

3) Make your flow runs use your custom image
You have two easy options:

Option A: Set a default image on the Kubernetes work pool (base job template)
- Fetch the default job template:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes > base-template.json
- Edit the image field in the template to your image, then create/update the pool:
Copy code
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template ./base-template.json --overwrite
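If you’d rather script the edit step, a small Python sketch can set the image variable’s default. The `template` dict below stands in for the JSON produced by `prefect work-pool get-default-base-job-template --type kubernetes`; the stock Kubernetes template exposes an `image` property under `variables.properties`, but verify the structure against your Prefect version.

```python
import json

# Stand-in for the JSON fetched from the CLI; the real file has many more
# keys, but the `image` variable lives here in the stock template.
template = {
    "variables": {
        "properties": {
            "image": {"type": "string", "description": "Docker image for the job"}
        }
    }
}

# Point the pool's default image at your custom image.
template["variables"]["properties"]["image"]["default"] = (
    "registry.example.com/prefect/flows:1.0.0"  # your image
)

print(json.dumps(template["variables"]["properties"]["image"], indent=2))
```

Write the modified JSON back to `base-template.json` before running the `prefect work-pool create ... --overwrite` command above.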
- Now all deployments targeting this pool will default to your image.

Option B: Set the image per deployment
- In code with Prefect 3.x:
Copy code
from prefect import flow

@flow(log_prints=True)
def my_flow():
    print("Hello from my custom image!")

if __name__ == "__main__":
    my_flow.deploy(
        name="prod",
        work_pool_name="my-k8s-pool",
        image="registry.example.com/prefect/flows:1.0.0",  # your image
    )
- Or via a project’s prefect.yaml + `prefect deploy`:
  - Initialize a deployment recipe:
Copy code
prefect init
Choose a Docker/Kubernetes recipe and set:
- build.dockerfile: your Dockerfile path
- image: your registry/name:tag
Then:
Copy code
prefect deploy
- Docs: prefect.yaml reference

Notes and references
- CLI help we referenced:
  - `prefect work-pool get-default-base-job-template --help`
  - `prefect work-pool create --help`
- Docs:
  - Kubernetes deployment + job templates
  - prefect.yaml (build/push/image)
- Community: It’s very common for teams to run Prefect workers and flows from custom, security-hardened images (either via Helm or their own Deployment manifests). The Helm chart and Kubernetes work pool docs show the standard patterns.

A couple quick questions so I can tailor this to your setup:
- Do you want the custom image for the worker, the flow-run jobs, or both?
- Are you deploying the worker with Helm or your own Kubernetes manifests?
- Which registry/base image does your org require (so I can suggest the exact Dockerfile snippets)?
r
@Marvin focusing on the worker only. My base image already has a user/group, and I would like to keep using them. Is that okay, or does the worker need to have a user named prefect and a folder named /app? Are there any requirements missing if I just do this:
Copy code
FROM <my-base-image>

# Install Prefect and the k8s extra
RUN pip install "prefect[kubernetes]==3.*"

# Default command can be the worker; you'll usually override args via your Deployment/Helm
CMD ["prefect", "worker", "start", "-p", "my-k8s-pool"]
m
thought for 42.8 seconds
Great focus. For the Kubernetes worker image, you do not need a specific user name (like `prefect`) or a specific working directory like `/app`. The worker is just a Python process that talks to the Prefect API and the Kubernetes API. Using your organization’s existing user/group is fine.

Your minimal Dockerfile is valid as long as the following are true:
- Python and pip are available
- You can install Prefect with the Kubernetes extra
- The container can reach the Prefect API over HTTPS (CA certs present)
- The process runs with permissions to read environment variables (API URL/KEY) and make outbound HTTPS requests

Your example:
Copy code
FROM <my-base-image>

# Install Prefect and the k8s extra
RUN pip install "prefect[kubernetes]==3.*"

# Default command can be the worker; you'll usually override args via your Deployment/Helm
CMD ["prefect", "worker", "start", "-p", "my-k8s-pool"]
That’s perfectly fine. A few practical notes to avoid surprises:
- PATH/permissions: If your base image runs as a non-root user and `pip install` can’t write to system site-packages, either install as root then switch back, or use a venv. Example:
Copy code
USER root
RUN pip install --no-cache-dir "prefect[kubernetes]==3.*"
USER myuser
If PATH issues arise, you can reliably launch with:
Copy code
CMD ["python", "-m", "prefect", "worker", "start", "-p", "my-k8s-pool"]
- CA certificates: Ensure your base has CA certs so the worker can reach Prefect Cloud over HTTPS. On Debian/Ubuntu-based images this is typically already present; on Alpine you may need:
Copy code
RUN apk add --no-cache ca-certificates
- No required working directory: The worker does not require `/app`. It may create or read `~/.prefect` if present, so having a valid HOME that’s writable is nice, but not strictly required if you’re passing all config via env vars.
- Kubernetes auth: The worker uses the in-cluster ServiceAccount. Make sure your Deployment/Helm assigns a ServiceAccount with RBAC to create/watch Jobs/Pods in the target namespace(s). This is a cluster config concern, not an image requirement.
- Prefect API config: Provide `PREFECT_API_URL` and, for Cloud, `PREFECT_API_KEY` via env vars or secrets in your K8s manifest/Helm values.
- CLI validation: `prefect worker start -p <pool>` is correct. See help: `prefect worker start --help`.

Docs:
- Kubernetes worker/work pool guide: Kubernetes deployment + workers

If you want, I can tailor a Dockerfile to your exact base (Debian/Ubuntu/Alpine/distroless) and user setup.
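If you want to sanity-check the PATH and CA-certificate points from inside your image, a quick snippet (run with the same interpreter the worker will use) covers both:

```python
import shutil
import ssl
import sys

# 1) PATH: the shell resolves `prefect` via PATH, while `python -m prefect`
#    loads the package from this interpreter's site-packages regardless.
print("interpreter:", sys.executable)
print("prefect on PATH:", shutil.which("prefect"))  # None = CLI not on PATH

# 2) CA certificates: an empty default cert store usually means the
#    ca-certificates package is missing, so HTTPS to the Prefect API fails.
stats = ssl.create_default_context().cert_store_stats()
print("cert store:", stats)
```

Run it as `docker run --rm <your-image> python -c "..."` (or drop it in a file) before shipping the image.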
r
@Marvin does not seem to work with the helm chart, I have this error: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/bin/tini": stat /usr/bin/tini: no such file or directory: unknown
m
thought for 101.7 seconds
Thanks, that error makes sense with a custom base image.

What’s happening
- The Prefect Helm chart starts the worker container with Tini as the entrypoint (e.g., `/usr/bin/tini -- …`) so it can properly forward signals and reap zombies.
- Your base image doesn’t include Tini at `/usr/bin/tini`, so the container fails to start with “stat /usr/bin/tini: no such file or directory”.

Two ways to fix it

1) Add Tini to your image (recommended)
- Debian/Ubuntu base:
Copy code
FROM <my-base-image>
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends tini ca-certificates \
 && rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir "prefect[kubernetes]==3.*"
# tini is at /usr/bin/tini on Debian/Ubuntu
- Alpine base:
Copy code
FROM <my-base-image>
USER root
RUN apk add --no-cache tini ca-certificates
# Alpine installs /sbin/tini; create the path Helm expects:
RUN ln -sf /sbin/tini /usr/bin/tini
RUN pip install --no-cache-dir "prefect[kubernetes]==3.*"
You don’t need to set ENTRYPOINT in your Dockerfile; the chart provides the command (which includes Tini). Keeping Tini is generally a good idea for proper signal handling.

2) Override the chart’s command/args to not use Tini
If you prefer not to include Tini, you can override the container command in your Helm values to directly run the worker. The exact keys can vary by chart version; look for fields that map to the container’s `command` and `args` in the worker chart values. For example:
Copy code
worker:
  image:
    repository: registry.example.com/prefect/worker
    tag: "1.0.0"
  # Example keys — verify against your chart version
  command: ["python", "-m", "prefect", "worker", "start"]
  args: ["-p", "my-k8s-pool"]
If the chart does not expose `command`/`args` in values, you can:
- Patch the rendered manifest with Kustomize or a Helm post-renderer to set `spec.template.spec.containers[0].command/args`, or
- Fork/extend the chart to add those values.

References
- Helm chart repo: prefect-helm
- Kubernetes worker/work pool guide: Prefect 3 Kubernetes guide

If you share your base image family (Debian/Ubuntu vs Alpine vs distroless) and the Helm chart version you’re using, I can give you the exact Dockerfile commands and values.yaml overrides.
r
@Marvin it's an Ubuntu-based image, and we don't want to add tini to it. After changing the command in the values of the chart to `/bin/bash -c --`, we get a different error; the log of the pod is
worker: line 1: prefect: command not found
The weird thing is that running prefect in the image with a docker run seems to work fine:
Copy code
$ docker run --rm -i  murmurationsas/prefect-kubernetes-worker:3.4-python-3.11-dev1 prefect --version
3.4.17
the describe of the pod shows that (snippet extracted from the output):
Copy code
Command:
      /bin/bash
      -c
      --
    Args:
      prefect
      worker
      start
      --type
      kubernetes
      --pool
      desp-pool
      --install-policy
      prompt
      --with-healthcheck
m
thought for 137.7 seconds
Thanks for the details. This is a classic combo of two things:
- Removing Tini from the chart command path
- Switching to a shell-based command without giving bash the command as a single string, plus PATH differences under the chart’s runtime user

What’s going on
- The Helm chart was originally invoking the worker through Tini. Since your image doesn’t have Tini and you overrode the command to use bash, Kubernetes now runs:
  - command: /bin/bash -c --
  - args: prefect worker start --type kubernetes ...
- For bash, -c expects the next single argument to be the full command string. The “--” only ends option parsing, so with your split args bash takes just “prefect” as the command string and “worker” as $0, with the rest as positional parameters. Bash then tries to execute the bare word “prefect”, which isn’t on the pod’s PATH, and that is exactly the “worker: line 1: prefect: command not found” you see: the real worker command never runs.

Two robust ways to fix it (no Tini required)

Option A: Avoid a shell and call Prefect directly
- Use the absolute path to Prefect or use Python’s module runner to avoid PATH surprises. Example Helm values:
Copy code
# values.yaml (keys vary by chart version; use the fields that map to container.command / container.args)
worker:
  # Ensure you reference your image/tag here
  # image: ...

  command: ["/usr/bin/env", "python", "-m", "prefect", "worker", "start"]
  args:
    - "--type"
    - "kubernetes"
    - "--pool"
    - "desp-pool"
    - "--install-policy"
    - "prompt"
    - "--with-healthcheck"
Notes:
- `/usr/bin/env python -m prefect` is very resilient to PATH issues, since the `prefect` console script itself doesn’t need to be on PATH.
- If you prefer the binary, first verify its path inside your image: `docker run --rm -it <your-image> which prefect` → likely /usr/local/bin/prefect
- Then set:
Copy code
command: ["/usr/local/bin/prefect", "worker", "start"]
args: ["--type", "kubernetes", "--pool", "desp-pool", "--install-policy", "prompt", "--with-healthcheck"]
Option B: Keep bash, but pass the command as a single string
- If you really want to use bash, give it one string to execute (and optionally load login env with -l):
Copy code
worker:
  command: ["/bin/bash", "-lc"]
  args:
    - "prefect worker start --type kubernetes --pool desp-pool --install-policy prompt --with-healthcheck"
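The split-arg failure mode is easy to reproduce outside the cluster. A small sketch using Python’s subprocess, with `echo $0 $1` standing in for the worker command so the effect is visible:

```python
import subprocess

# What the chart effectively ran: /bin/bash -c -- prefect worker start ...
# After `--` ends option parsing, bash takes only the FIRST remaining arg
# as the command string; the next arg becomes $0 and the rest $1, $2, ...
broken = subprocess.run(
    ["/bin/bash", "-c", "--", "echo $0 $1", "worker", "start"],
    capture_output=True, text=True,
)
print(broken.stdout.strip())  # -> worker start  (the trailing args became $0/$1)

# The fix: hand bash ONE command string.
fixed = subprocess.run(
    ["/bin/bash", "-c", "echo prefect worker start"],
    capture_output=True, text=True,
)
print(fixed.stdout.strip())  # -> prefect worker start
```

In the broken case, only `echo $0 $1` (i.e. `prefect` in your pod) is treated as the command; everything after it is just positional parameters, which matches the “worker: line 1: prefect: command not found” log.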
- The entire Prefect command must be a single arg string.

Other things to double-check
- PATH under the chart’s runtime user: The chart may set runAsNonRoot. If pip installed Prefect into /usr/local/bin, ensure that directory is on PATH for that user. Using python -m prefect avoids this.
- CA certs: Ubuntu base usually has these, but confirm outbound HTTPS works to the Prefect API.
- Helm values keys: Different chart versions expose command/args under slightly different keys (e.g., worker.command / worker.args or container.command / container.args). If you share your chart version, I’ll point to the exact values keys.
- Signal handling: Without Tini, signal propagation is handled by the invoked process (python/prefect). Most users are fine without Tini, but be aware Tini is typically used for proper reaping and signal forwarding.

References
- Prefect Helm chart repo: prefect-helm
- Kubernetes workers guide: Prefect 3 Kubernetes guide

If you can share:
- Your Helm chart version
- Output of `docker run --rm -it <your-image> which prefect`
I can give you an exact values.yaml snippet tailored to your chart and image.
r
@Marvin in the official chart, the working dir and the home seem hardcoded: https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/templates/deployment.yaml#L217. Relevant snippet extracted from there:
Copy code
workingDir: /home/prefect
env:
  - name: HOME
    value: /home/prefect
Is this necessary? Is there any way to work around it? BTW, here is my Dockerfile:
Copy code
# BASE_IMAGE must be declared before FROM to be usable in it
ARG BASE_IMAGE
FROM ${BASE_IMAGE}

ARG PREFECT_VERSION
ENV PREFECT_VERSION=${PREFECT_VERSION}

# Install Prefect and the k8s extra
RUN pip install "prefect[kubernetes]==${PREFECT_VERSION}.*"

CMD ["prefect", "worker", "start", "-p", "my-k8s-pool"]
The Base image is an in-house ubuntu python3.11 base image with its own VENV
m
thought for 75.8 seconds
You’re right: the official chart hardcodes both workingDir and HOME to /home/prefect. That’s not a Prefect requirement; it’s just a convention the chart uses so Prefect can put state under ~/.prefect and to have a consistent cwd. Prefect does not need a user named prefect or a specific home/working directory.

You have two clean paths forward without adding Tini:

Option A: Override via Helm values (recommended)
- Set the container command/args to run Prefect explicitly (avoid PATH surprises).
- Override HOME and workingDir to your image’s user/home.
- Add PATH if your venv’s bin is not first on PATH.

Example values.yaml (keys may vary slightly by chart version; use the ones that map to the container’s command/args/env/workingDir):
Copy code
worker:
  # Your image
  image:
    repository: murmurationsas/prefect-kubernetes-worker
    tag: "3.4-python-3.11-dev1"

  # Run Prefect via the venv’s Python to avoid PATH issues
  command: ["/usr/bin/env", "python", "-m", "prefect", "worker", "start"]
  args:
    - "--type"
    - "kubernetes"
    - "--pool"
    - "desp-pool"
    - "--install-policy"
    - "prompt"
    - "--with-healthcheck"

  # Make the container use your real home and CWD
  # Some chart versions expose these directly; if not, use podTemplate (below)
  workingDir: "/home/youruser"

  env:
    # Override the HOME set by the chart (ensure your env block is merged AFTER the default)
    - name: HOME
      value: /home/youruser
    # Ensure your venv bin is at the front of PATH
    - name: PATH
      value: "/path/to/your/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
If your chart version does not expose workingDir/env/command/args directly, most recent versions provide a podTemplate block you can use to override the container spec:
Copy code
worker:
  podTemplate:
    spec:
      containers:
        - name: worker
          workingDir: /home/youruser
          env:
            - name: HOME
              value: /home/youruser
            - name: PATH
              value: "/path/to/your/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
          command: ["/usr/bin/env", "python", "-m", "prefect", "worker", "start"]
          args: ["--type","kubernetes","--pool","desp-pool","--install-policy","prompt","--with-healthcheck"]
Tip: If the chart merges env lists with defaults, Kubernetes will take the last occurrence of a variable name. By putting your env entries after the chart’s defaults, your HOME will win.

Option B: Image-level compatibility shim
If overriding the chart isn’t convenient, make the chart’s defaults harmless in your image:
- Create /home/prefect as a symlink to your actual home or as a writable directory for your user:
Copy code
# Don’t pre-create /home/prefect as a real directory: ln cannot replace a dir
RUN mkdir -p /home/youruser \
 && chown -R youruser:yourgroup /home/youruser \
 && ln -sfn /home/youruser /home/prefect
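The shim’s behavior is easy to verify with a quick, self-contained check (using a temp directory in place of /home):

```python
import pathlib
import tempfile

# Simulate the shim: "prefect" is a symlink to the real user's home, so
# anything the worker writes under HOME=/home/prefect lands in the real
# home directory.
home_root = pathlib.Path(tempfile.mkdtemp())
real_home = home_root / "youruser"
real_home.mkdir()
shim = home_root / "prefect"          # stands in for /home/prefect
shim.symlink_to(real_home)

(shim / ".prefect").mkdir()           # what the worker may create under ~
print((real_home / ".prefect").is_dir())  # -> True: it lives in the real home
```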
This way, even though the chart sets HOME=/home/prefect and workingDir=/home/prefect, the process actually lands in your user’s home.

Why you saw “prefect: command not found”
- When you switched to `/bin/bash -c --`, the shell expected a single string to execute; with the command split into args, “--” merely ended option parsing, so “prefect” alone became the command string and “worker” became $0, and your real command never ran.
- Also, the runtime PATH inside the chart may not include your venv’s bin. That’s why invoking Prefect via `python -m prefect` or by absolute path to your venv binary is more reliable.

Dockerfile note for venvs
Since your base has a venv, install Prefect into that venv explicitly to avoid ambiguity:
Copy code
# Example: if your venv is at /opt/venv
RUN /opt/venv/bin/pip install --no-cache-dir "prefect[kubernetes]==${PREFECT_VERSION}.*"
Then run the worker with that venv’s Python in the chart:
Copy code
command: ["/opt/venv/bin/python", "-m", "prefect", "worker", "start"]
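Before wiring that command into the chart, it’s worth confirming inside the image that the venv interpreter can actually import Prefect, e.g.:

```python
import importlib.util
import sys

# Run this with the exact interpreter the chart will invoke
# (e.g. /opt/venv/bin/python): if find_spec returns None, the worker
# command would fail with ModuleNotFoundError instead of starting.
spec = importlib.util.find_spec("prefect")
print(sys.executable, "->", "prefect found" if spec else "prefect NOT found")
```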
Links
- Helm chart: prefect-helm
- K8s workers guide: Prefect 3 Kubernetes guide

If you can share your Helm chart version and the venv path (e.g., /opt/venv), I’ll give you an exact values.yaml snippet for that chart version.