# ask-marvin
y
@Marvin i managed to deploy a flow onto the prefect instance, but the flow fails with not being able to set the working directory. where is this specified and how can i work around it? the prefect server is self-hosted, and the server and worker are both deployed on kubernetes using the official prefect-helm charts
m
🔧 Using: explore_module_offerings
📊 Tools used: display_callable_signature (4x) • explore_module_offerings (4x) • research_prefect_topic (4x) • review_common_3x_gotchas (4x) • search_prefect_3x_docs (4x)
✅ thought for 214.8 seconds
Short answer: the working directory for a Kubernetes-run flow is controlled by the container spec in your work pool’s base job template. If Kubernetes can’t set that directory (it doesn’t exist or isn’t writable), your flow will fail before Prefect even gets a chance to be clever.

Where it’s specified
- In Prefect 3.x Kubernetes work pools, the container working directory is the Pod’s `containers[].workingDir` field in the work pool’s base job template.
- By default, Prefect’s Kubernetes template does not set `workingDir`. Kubernetes then uses the image’s default (often `/`), which is… unhelpful if you run as a non-root user.

How to inspect and set it

1) Inspect your work pool and template
```bash
prefect work-pool inspect <your-pool>
prefect work-pool get-default-base-job-template --type kubernetes > k8s-template.json
```
2) Edit `k8s-template.json` to set a safe working directory and ensure it exists. Typical pattern:
- Mount an `emptyDir` (or your own PVC) at a path
- Set `workingDir` to that path
- Optionally set `HOME` and `TMPDIR` to writable locations

Example snippet to add under `job_manifest.spec.template.spec`:
```json
"volumes": [
  { "name": "workdir", "emptyDir": {} }
],
"containers": [
  {
    "name": "prefect-job",
    "image": "{{ image }}",
    "args": "{{ command }}",
    "env": "{{ env }}",
    "imagePullPolicy": "{{ image_pull_policy }}",
    "workingDir": "/opt/prefect/work",
    "volumeMounts": [
      { "name": "workdir", "mountPath": "/opt/prefect/work" }
    ]
  }
],
"securityContext": {
  "fsGroup": 1000
}
```
Optionally add env for home/tmp:
```json
"env": [
  {"name": "HOME", "value": "/opt/prefect/work"},
  {"name": "TMPDIR", "value": "/tmp"}
]
```
3) Update the work pool:
```bash
prefect work-pool update <your-pool> --base-job-template k8s-template.json
```
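If you’d rather script the step-2 edit than hand-edit the JSON, a minimal sketch like this can do it between steps 1 and 3. It uses only the standard library; the file name and `/opt/prefect/work` path match the examples above, and it assumes the default template layout (`job_configuration.job_manifest.spec.template.spec`) described in this answer:

```python
import json

# Load the template exported in step 1
with open("k8s-template.json") as f:
    template = json.load(f)

pod_spec = template["job_configuration"]["job_manifest"]["spec"]["template"]["spec"]

# Add a writable emptyDir volume and use it as the working directory.
# The path and volume name are illustrative choices, not Prefect requirements.
workdir = "/opt/prefect/work"
pod_spec.setdefault("volumes", []).append({"name": "workdir", "emptyDir": {}})

container = pod_spec["containers"][0]
container["workingDir"] = workdir
container.setdefault("volumeMounts", []).append(
    {"name": "workdir", "mountPath": workdir}
)

with open("k8s-template.json", "w") as f:
    json.dump(template, f, indent=2)
```

Then apply the updated file with the `prefect work-pool update` command from step 3.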
Common causes of “cannot set working directory”
- `workingDir` points to a path that doesn’t exist in the image and isn’t mounted. Kubernetes refuses to start the container.
- The container runs as a non-root user but the directory is owned by root. Use `fsGroup`, adjust `runAsUser`, or bake the directory with correct permissions into your image.
- Temp/home paths are not writable, causing Prefect’s code materialization to fail before `chdir`. Set `HOME` and/or `TMPDIR` to writable locations or mount a volume.

Helm notes
- The Helm chart deploys the worker; the flow run Jobs come from the work pool’s base job template (server-side). You generally fix working directory issues by editing the work pool template (via UI or CLI), not Helm values.
- If you prefer, bake the directory into your image and set `WORKDIR` in the Dockerfile to something like `/opt/prefect/work`. Then you don’t need an explicit `workingDir` in the template.

If you can share the exact error text and your work pool’s current job template (or the Deployment > Work Pool > Base job template from the UI), I can point out the minimal change.

Docs and references
- Kubernetes container workingDir: K8s API: Container.workingDir
- Work pools and base job templates: Prefect docs: Base job template
- Kubernetes work pools: Prefect docs: Kubernetes work pools
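A quick way to see what the flow-run pod actually gets is to run a tiny diagnostic flow through the deployment. A minimal sketch (the flow name is illustrative; it uses only the standard library plus Prefect):

```python
import os
import tempfile

from prefect import flow, get_run_logger


@flow
def check_workdir():
    """Log where the flow run landed and prove the directory is writable."""
    logger = get_run_logger()
    logger.info("cwd: %s", os.getcwd())
    logger.info("HOME: %s", os.environ.get("HOME"))
    logger.info("TMPDIR: %s", os.environ.get("TMPDIR"))
    # A throwaway write surfaces permission problems immediately
    with tempfile.NamedTemporaryFile(dir=os.getcwd()) as f:
        f.write(b"ok")
    logger.info("working directory is writable")


if __name__ == "__main__":
    check_workdir()
```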
y
@Marvin can i specify it within prefect.yaml
m
✅ thought for 50.6 seconds
Short version: only if your work pool’s base job template exposes it as a variable. prefect.yaml can set job variables, but it cannot directly edit the Kubernetes Pod spec.

What works
- In prefect.yaml, you can set work pool `job_variables` that map to placeholders in the base job template. If your template includes something like `"workingDir": "{{ working_dir }}"`, you can do this:
```yaml
deployments:
  - name: my-deployment
    entrypoint: flows/flow.py:flow
    work_pool:
      name: my-k8s-pool
      job_variables:
        image: your/image:tag
        working_dir: /opt/prefect/work
        env:
          HOME: /opt/prefect/work
          TMPDIR: /tmp
```
But this only works if the base job template has that `{{ working_dir }}` placeholder.

What you likely need to do once
- Update the work pool’s base job template to add `workingDir` (and a volume) if it’s not already parameterized:

1) Export the default template:
```bash
prefect work-pool get-default-base-job-template --type kubernetes > k8s-template.json
```
2) Edit it to include a writable mount and workingDir (example):
```json
"spec": {
    "template": {
      "spec": {
        "volumes": [{ "name": "workdir", "emptyDir": {} }],
        "containers": [{
          "name": "prefect-job",
          "image": "{{ image }}",
          "args": "{{ command }}",
          "env": "{{ env }}",
          "imagePullPolicy": "{{ image_pull_policy }}",
          "workingDir": "{{ working_dir }}",
          "volumeMounts": [{ "name": "workdir", "mountPath": "/opt/prefect/work" }]
        }],
        "restartPolicy": "Never",
        "serviceAccountName": "{{ service_account_name }}"
      }
    }
  }
```
3) Save and apply:
```bash
prefect work-pool update my-k8s-pool --base-job-template k8s-template.json
```
4) Now set `working_dir` per-deployment in prefect.yaml (as above).

Alternative
- Bake it into your image and skip templating: set `WORKDIR /opt/prefect/work` in your Dockerfile and ensure the directory exists with correct permissions. This is the least exciting but the most reliable option.

If you paste your exact error and your pool’s current base job template, I’ll point to the single change you need.
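If you deploy from Python rather than prefect.yaml, the same job variables can be passed to `flow.deploy()`. A minimal sketch, assuming the pool’s template exposes the `{{ working_dir }}` placeholder as above; the flow, deployment, pool, and image names are placeholders to replace:

```python
from prefect import flow


@flow
def my_flow():
    print("running in the configured working directory")


if __name__ == "__main__":
    # job_variables are rendered into the work pool's base job template;
    # working_dir only takes effect if the template references {{ working_dir }}.
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="my-k8s-pool",
        image="your/image:tag",
        build=False,  # assumes the image already exists in your registry
        push=False,
        job_variables={
            "working_dir": "/opt/prefect/work",
            "env": {"HOME": "/opt/prefect/work", "TMPDIR": "/tmp"},
        },
    )
```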
y
@Marvin can you give me the values to be fed into the worker helm chart to modify the job template to include an emptyDir, mount it, and set that as the working dir?
m
✅ thought for 79.4 seconds
Short version: the Helm chart deploys the worker, but the flow-run Job spec (where `workingDir`, volumes, etc. live) comes from the work pool’s base job template on the server. You can’t change that just by tweaking the worker Pod. You either:
- set the base job template via Helm (if your chart supports it), or
- run a one-time init step (via Helm) that calls `prefect work-pool update --base-job-template …`.

Below are both recipes. Pick the one your chart version supports.

Option A — chart supports setting base job template in values

Some chart versions support a `workPool.baseJobTemplate` block. If you see it in your chart docs, use something like:
```yaml
worker:
  enabled: true

  # The work pool the worker will poll
  workPool:
    name: my-k8s-pool
    type: kubernetes

    # Server-side base job template that defines the Job manifest for flow runs
    baseJobTemplate:
      job_configuration:
        command: "{{ command }}"
        env: "{{ env }}"
        labels: "{{ labels }}"
        name: "{{ name }}"
        namespace: "{{ namespace }}"
        job_manifest:
          apiVersion: batch/v1
          kind: Job
          metadata:
            generateName: "{{ name }}-"
            labels: "{{ labels }}"
            namespace: "{{ namespace }}"
          spec:
            backoffLimit: 0
            ttlSecondsAfterFinished: "{{ finished_job_ttl }}"
            template:
              spec:
                restartPolicy: Never
                serviceAccountName: "{{ service_account_name }}"
                volumes:
                  - name: workdir
                    emptyDir: {}
                containers:
                  - name: prefect-job
                    image: "{{ image }}"
                    imagePullPolicy: "{{ image_pull_policy }}"
                    args: "{{ command }}"
                    env: "{{ env }}"
                    workingDir: "/opt/prefect/work"
                    volumeMounts:
                      - name: workdir
                        mountPath: "/opt/prefect/work"
        cluster_config: "{{ cluster_config }}"
        job_watch_timeout_seconds: "{{ job_watch_timeout_seconds }}"
        pod_watch_timeout_seconds: "{{ pod_watch_timeout_seconds }}"
        stream_output: "{{ stream_output }}"
```
Option B — bootstrap the base job template with an init container

If your chart does not expose `baseJobTemplate`, add an init container that writes the template and updates the work pool on startup:
```yaml
worker:
  enabled: true
  # Make sure these are set appropriately
  extraEnv:
    - name: PREFECT_API_URL
      value: https://prefect.your.domain/api
    - name: PREFECT_API_KEY
      valueFrom:
        secretKeyRef:
          name: prefect-api
          key: key
    - name: PREFECT_WORK_POOL_NAME
      value: my-k8s-pool
  extraInitContainers:
    - name: set-base-job-template
      image: prefecthq/prefect:3-latest
      env:
        - name: PREFECT_API_URL
          value: https://prefect.your.domain/api  # or inherit from pod env above
        - name: PREFECT_API_KEY
          valueFrom:
            secretKeyRef:
              name: prefect-api
              key: key
        - name: PREFECT_WORK_POOL_NAME
          value: my-k8s-pool
      command:
        - sh
        - -ec
        - |
          cat >/config/job-template.json <<'EOF'
          {
            "job_configuration": {
              "command": "{{ command }}",
              "env": "{{ env }}",
              "labels": "{{ labels }}",
              "name": "{{ name }}",
              "namespace": "{{ namespace }}",
              "job_manifest": {
                "apiVersion": "batch/v1",
                "kind": "Job",
                "metadata": {
                  "generateName": "{{ name }}-",
                  "labels": "{{ labels }}",
                  "namespace": "{{ namespace }}"
                },
                "spec": {
                  "backoffLimit": 0,
                  "ttlSecondsAfterFinished": "{{ finished_job_ttl }}",
                  "template": {
                    "spec": {
                      "restartPolicy": "Never",
                      "serviceAccountName": "{{ service_account_name }}",
                      "volumes": [
                        { "name": "workdir", "emptyDir": {} }
                      ],
                      "containers": [
                        {
                          "name": "prefect-job",
                          "image": "{{ image }}",
                          "imagePullPolicy": "{{ image_pull_policy }}",
                          "args": "{{ command }}",
                          "env": "{{ env }}",
                          "workingDir": "/opt/prefect/work",
                          "volumeMounts": [
                            { "name": "workdir", "mountPath": "/opt/prefect/work" }
                          ]
                        }
                      ]
                    }
                  }
                }
              },
              "cluster_config": "{{ cluster_config }}",
              "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
              "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
              "stream_output": "{{ stream_output }}"
            }
          }
          EOF
          # Create the pool if it doesn't exist, then update the template
          prefect work-pool create --type kubernetes "${PREFECT_WORK_POOL_NAME}" || true
          prefect work-pool update "${PREFECT_WORK_POOL_NAME}" --base-job-template /config/job-template.json
      volumeMounts:
        - name: config
          mountPath: /config
  extraVolumes:
    - name: config
      emptyDir: {}
```
Notes
- This sets a writable `emptyDir` at `/opt/prefect/work` and uses it as the container `workingDir`. Adjust path/permissions to suit your image’s user.
- If you prefer less ceremony, set `WORKDIR /opt/prefect/work` in your Dockerfile and create the directory there. Then you can skip the `workingDir` in the template entirely.

References, because I’m nothing if not predictable:
- Prefect Helm chart: prefect-helm repo and chart docs
- Base job template concept: Prefect docs: Base job template
- K8s workingDir field: Kubernetes: Container.workingDir

If you can share which chart version you’re on, I can tailor this to the exact values keys your chart exposes.
y
@Marvin please give me a minimal base job template JSON which adds an emptyDir with a max size of 100MB and mounts it into the job pod
m
✅ thought for 26.5 seconds
Here you go — minimal base job template that adds an emptyDir (100Mi), mounts it, and uses it as the working directory:
```json
{
  "job_configuration": {
    "command": "{{ command }}",
    "env": "{{ env }}",
    "labels": "{{ labels }}",
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-",
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}"
      },
      "spec": {
        "backoffLimit": 0,
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}",
        "template": {
          "spec": {
            "restartPolicy": "Never",
            "serviceAccountName": "{{ service_account_name }}",
            "volumes": [
              {
                "name": "workdir",
                "emptyDir": {
                  "sizeLimit": "100Mi"
                }
              }
            ],
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "workingDir": "/opt/prefect/work",
                "volumeMounts": [
                  {
                    "name": "workdir",
                    "mountPath": "/opt/prefect/work"
                  }
                ]
              }
            ]
          }
        }
      }
    },
    "cluster_config": "{{ cluster_config }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
    "stream_output": "{{ stream_output }}"
  }
}
```
Apply it:
```bash
prefect work-pool update <your-k8s-pool> --base-job-template basejobtemplate.json
```
If you don’t want to change the working directory, just delete the `workingDir` line. For the concept details, see Base job template.
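To confirm the update landed, you can read the pool back through Prefect’s Python client. A sketch under the assumption that `PREFECT_API_URL` points at your self-hosted server (attribute names per the Prefect 3.x client schemas; verify against your version):

```python
import asyncio
import json

from prefect.client.orchestration import get_client


async def show_template(pool_name: str) -> None:
    # Fetch the work pool from the Prefect API
    async with get_client() as client:
        pool = await client.read_work_pool(work_pool_name=pool_name)
    # Print the server-side base job template that flow-run Jobs are built from
    print(json.dumps(pool.base_job_template, indent=2))


if __name__ == "__main__":
    asyncio.run(show_template("<your-k8s-pool>"))
```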
y
@Marvin do i need all the other details like serviceaccountname?
m
✅ thought for 33.9 seconds
No—you don’t need `serviceAccountName` (or a bunch of the other extras) unless you actually need a specific SA/RBAC. Kubernetes will use the namespace’s default service account if you omit it.

Here’s a pared-down base job template that just adds a 100Mi emptyDir, mounts it, and sets it as the working directory. Everything else is kept to the minimum needed for Prefect to render and run.
```json
{
  "job_configuration": {
    "command": "{{ command }}",
    "env": "{{ env }}",
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-"
      },
      "spec": {
        "template": {
          "spec": {
            "restartPolicy": "Never",
            "volumes": [
              {
                "name": "workdir",
                "emptyDir": {
                  "sizeLimit": "100Mi"
                }
              }
            ],
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "workingDir": "/opt/prefect/work",
                "volumeMounts": [
                  {
                    "name": "workdir",
                    "mountPath": "/opt/prefect/work"
                  }
                ]
              }
            ]
          }
        }
      }
    }
  }
}
```
Notes
- `restartPolicy: Never` is important for Jobs; keep it.
- You can also drop `labels`, `ttlSecondsAfterFinished`, and `imagePullPolicy` if you don’t need them.
- If your cluster enforces specific RBAC, imagePullSecrets, or PodSecurity policies, you may need to add `serviceAccountName` back (or a `securityContext`).

Apply it:
```bash
prefect work-pool update <your-k8s-pool> --base-job-template basejobtemplate.json
```
For background, see Prefect docs: Base job template, and the K8s `workingDir` field: Kubernetes: Container.workingDir.
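Before applying a hand-trimmed template, a quick pre-flight check catches JSON typos and accidentally deleted placeholders. A small sketch, stdlib only; the required-placeholder list reflects the minimal template above, not an official contract:

```python
import json

# Standard placeholders the minimal template above still relies on
REQUIRED_PLACEHOLDERS = [
    "{{ command }}", "{{ env }}", "{{ image }}", "{{ name }}", "{{ namespace }}",
]

with open("basejobtemplate.json") as f:
    raw = f.read()

template = json.loads(raw)  # raises on malformed JSON

missing = [p for p in REQUIRED_PLACEHOLDERS if p not in raw]
if missing:
    raise SystemExit(f"template is missing placeholders: {missing}")

pod_spec = template["job_configuration"]["job_manifest"]["spec"]["template"]["spec"]
assert pod_spec.get("restartPolicy") == "Never", "Jobs need restartPolicy: Never"
print("template looks sane")
```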
y
@Marvin don't i need a variable block as well?
m
✅ thought for 35.2 seconds
Short answer: only if you want to parameterize new fields.
- If you hardcode the emptyDir size and workingDir in the job manifest, you can omit the variables block when updating an existing Kubernetes work pool. The pool already has the standard variables (image, command, env, name, namespace, etc.) that your template references.
- If you want to control the working dir or size via prefect.yaml (job_variables), you must add matching entries to the base job template’s variables block.

Example: keep your job manifest but make workingDir and size configurable. Job manifest (uses new placeholders):
```json
{
  "job_configuration": {
    "command": "{{ command }}",
    "env": "{{ env }}",
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-"
      },
      "spec": {
        "template": {
          "spec": {
            "restartPolicy": "Never",
            "volumes": [
              {
                "name": "workdir",
                "emptyDir": {
                  "sizeLimit": "{{ workdir_size_limit }}"
                }
              }
            ],
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "workingDir": "{{ working_dir }}",
                "volumeMounts": [
                  {
                    "name": "workdir",
                    "mountPath": "{{ working_dir }}"
                  }
                ]
              }
            ]
          }
        }
      }
    }
  },
  "variables": {
    "type": "object",
    "properties": {
      "working_dir": {
        "type": "string",
        "default": "/opt/prefect/work",
        "title": "Working directory"
      },
      "workdir_size_limit": {
        "type": "string",
        "default": "100Mi",
        "title": "emptyDir size limit"
      }
    }
  }
}
```
Notes
- This variables block only defines your new placeholders. The existing standard variables remain as-is on an existing pool.
- If you’re creating a brand new pool with this JSON, it’s safest to start from the default template and append these two properties to its variables block (see the sketch after this snippet):
```bash
prefect work-pool get-default-base-job-template --type kubernetes > base.json
# add the volumes/workingDir edits and the two variable properties above, then:
prefect work-pool update <your-pool> --base-job-template base.json
```
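The append step in the comment above can be scripted. A minimal sketch, stdlib only; it assumes the `base.json` produced by the export command and handles only the variables half — the volumes/workingDir edits still go in the manifest as shown earlier:

```python
import json

with open("base.json") as f:
    template = json.load(f)

# Append the two new variable definitions alongside the standard ones
template["variables"]["properties"].update(
    {
        "working_dir": {
            "type": "string",
            "default": "/opt/prefect/work",
            "title": "Working directory",
        },
        "workdir_size_limit": {
            "type": "string",
            "default": "100Mi",
            "title": "emptyDir size limit",
        },
    }
)

with open("base.json", "w") as f:
    json.dump(template, f, indent=2)
```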
Then you can set them per deployment in prefect.yaml:
```yaml
deployments:
  - name: my-deployment
    entrypoint: flows/flow.py:flow
    work_pool:
      name: my-k8s-pool
      job_variables:
        working_dir: /opt/prefect/work
        workdir_size_limit: 100Mi
```
If you don’t need configurability, hardcode the path and size in the manifest and skip the variables block entirely. Fewer moving parts, fewer things to break.