Kate - 10/09/2025, 1:33 PM
Marvin - 10/09/2025, 1:36 PM
1) Set the logging env vars on the worker Deployment
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  template:
    spec:
      containers:
        - name: worker
          image: prefecthq/prefect:3.4.4-python3.11
          env:
            - name: PREFECT_API_URL
              value: http://prefect-api.prefect.svc.cluster.local/api
            - name: PREFECT_LOGGING_LEVEL
              value: INFO
            - name: PREFECT_LOGGING_LOG_PRINTS
              value: "true"
            - name: PREFECT_LOGGING_TO_API_ENABLED
              value: "true"
            - name: PREFECT_LOGGING_TO_API_MAX_LOG_SIZE
              value: "1000000"
```
2) Ensure flow-run pods inherit the same env vars
- For Kubernetes work pools, add these environment variables in the work pool’s job/pod template. This ensures the worker puts them into each Job it creates for your flows. Env vars on the worker container do not automatically propagate to flow pods unless included in the job template.
Example (pod template fragment you can add in your work pool job template):
```
spec:
  template:
    spec:
      containers:
        - name: flow
          env:
            - name: PREFECT_LOGGING_LEVEL
              value: DEBUG
            - name: PREFECT_LOGGING_LOG_PRINTS
              value: "true"
            - name: PREFECT_LOGGING_TO_API_ENABLED
              value: "true"
```
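If you deploy with `prefect.yaml`, you can also merge env vars into the work pool's job template per deployment via `job_variables` instead of editing the raw pod template. A sketch (the deployment, entrypoint, and pool names are placeholders):
```
# prefect.yaml fragment -- names are placeholders
deployments:
  - name: my-deployment
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-k8s-pool
      job_variables:
        env:
          PREFECT_LOGGING_LEVEL: DEBUG
          PREFECT_LOGGING_LOG_PRINTS: "true"
          PREFECT_LOGGING_TO_API_ENABLED: "true"
```
This scopes the settings to one deployment, while editing the work pool's base job template applies them to every flow run on that pool.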
3) Use a custom logging config file (optional, for full control)
- Create a ConfigMap with a logging.yml and mount it into both your worker and flow-run pods. Then set PREFECT_LOGGING_CONFIG_PATH to the mounted path (e.g. /etc/prefect/logging.yml).
ConfigMap:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: prefect-logging
data:
  logging.yml: |
    version: 1
    formatters:
      standard:
        format: "%(asctime)s | %(levelname)s | %(name)s - %(message)s"
    handlers:
      console:
        class: logging.StreamHandler
        level: INFO
        formatter: standard
    root:
      level: INFO
      handlers: [console]
```
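Before shipping the ConfigMap, you can sanity-check that the config is a valid `dictConfig` schema locally with stdlib Python. A quick sketch that mirrors the logging.yml above as a dict (the logger name is arbitrary):
```python
import logging
import logging.config

# Python-dict mirror of the logging.yml above; dictConfig validates the schema
# and raises ValueError if it is malformed.
config = {
    "version": 1,
    "formatters": {
        "standard": {
            "format": "%(asctime)s | %(levelname)s | %(name)s - %(message)s"
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "standard",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(config)
logging.getLogger("prefect.sanity-check").info("logging config OK")
```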
Mount and reference it:
```
...
      volumes:
        - name: prefect-logging
          configMap:
            name: prefect-logging
      containers:
        - name: worker
          volumeMounts:
            - name: prefect-logging
              mountPath: /etc/prefect
          env:
            - name: PREFECT_LOGGING_CONFIG_PATH
              value: /etc/prefect/logging.yml
```
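For full control over flow-run logs, the same ConfigMap can be mounted into the flow-run pods through the work pool's pod template. A sketch, assuming the container name `flow` and mount path used in the fragments above:
```
spec:
  template:
    spec:
      volumes:
        - name: prefect-logging
          configMap:
            name: prefect-logging
      containers:
        - name: flow
          volumeMounts:
            - name: prefect-logging
              mountPath: /etc/prefect
          env:
            - name: PREFECT_LOGGING_CONFIG_PATH
              value: /etc/prefect/logging.yml
```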
Kate - 10/09/2025, 1:45 PM
Marvin - 10/09/2025, 1:46 PM
10/09/2025, 1:46 PMPREFECT_LOGGING_TO_API_ENABLED=true in the flow pods.
- Worker logs: not affected. They stay at the default config and go to the worker pod's stdout/stderr; they will not show up in the UI unless you also enable API logging on the worker.
When this is fine
- If you only care about logs attached to flow runs in the UI, configuring just the flow pods is sufficient.
When you should set both
- If you want to see worker activity (polling, job submission, infra errors that happen before a flow pod starts, etc.) centrally or in the UI, set the same env vars on the worker too. Otherwise, you’ll need to use kubectl logs on the worker pod to debug those issues.
Using the official Helm charts
- Put env vars on the worker container in your worker chart values so the worker itself is configured.
- Put env vars into the work pool’s base job/pod template (via the worker chart’s work pool config) so every flow-run pod inherits them.
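A values.yaml sketch for the prefect-worker chart; the field names here are assumptions, so verify them against the values.yaml of the chart version you are running:
```
# values.yaml sketch -- field names assumed, check your chart version
worker:
  extraEnvVars:
    - name: PREFECT_LOGGING_LEVEL
      value: "INFO"
    - name: PREFECT_LOGGING_LOG_PRINTS
      value: "true"
    - name: PREFECT_LOGGING_TO_API_ENABLED
      value: "true"
  config:
    workPool: my-k8s-pool
    # base job template (JSON) so flow-run pods inherit the same env vars
    baseJobTemplate:
      configuration: |
        { ... your base job template with the env vars above ... }
```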
Docs
- Settings and profiles
- Logging customization
- Deploy a worker using Helm
If you can share your worker chart values (or how you’re defining the work pool/pod template), I’ll give you an exact values.yaml snippet you can paste.