# ask-marvin
k
@Marvin I'm running Prefect 3.4.4 OSS on Kubernetes. Previously I was setting the logging settings in profiles.toml. How do I do it in the cluster?
m
thought for 210.7 seconds
Great question. In Prefect 3.x, the recommended way to configure logging in Kubernetes is via environment variables on your worker and flow-run pods (instead of relying on a local profiles.toml). Profiles still work locally, but in-cluster env vars are simpler, consistent, and take precedence.

Key logging settings you'll likely want to set:
- PREFECT_LOGGING_LEVEL: DEBUG | INFO | WARNING | ERROR | CRITICAL
- PREFECT_LOGGING_LOG_PRINTS: "true" to capture print() output as logs
- PREFECT_LOGGING_COLORS: "false" to disable color codes
- PREFECT_LOGGING_MARKUP: "true" to allow rich markup
- PREFECT_LOGGING_EXTRA_LOGGERS: comma-separated list of logger names to attach Prefect handlers to
- PREFECT_LOGGING_CONFIG_PATH: path to a custom logging YAML for full control
- PREFECT_LOGGING_TO_API_ENABLED: "true"/"false"
- PREFECT_LOGGING_TO_API_BATCH_INTERVAL: seconds between batches (float)
- PREFECT_LOGGING_TO_API_BATCH_SIZE: max batch size in bytes (int)
- PREFECT_LOGGING_TO_API_MAX_LOG_SIZE: truncate overly long log records to this many bytes (int)
- PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW: warn | error | ignore

Docs:
- Settings and profiles
- Logging customization
- Kubernetes deployment examples

How to apply this in Kubernetes

1) Set env vars on your Prefect worker Deployment (controls worker logs)

If you deploy a worker via Helm, add these under the worker container env section in your values. If you deploy via raw manifests, add them to the Deployment spec.

Example (Deployment manifest fragment):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  template:
    spec:
      containers:
        - name: worker
          image: prefecthq/prefect:3.4.4-python3.11
          env:
            - name: PREFECT_API_URL
              value: http://prefect-api.prefect.svc.cluster.local/api
            - name: PREFECT_LOGGING_LEVEL
              value: INFO
            - name: PREFECT_LOGGING_LOG_PRINTS
              value: "true"
            - name: PREFECT_LOGGING_TO_API_ENABLED
              value: "true"
            - name: PREFECT_LOGGING_TO_API_MAX_LOG_SIZE
              value: "1000000"
```
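Once the Deployment is rolled out, one way to confirm the variables took effect (a sketch; the deployment name is an assumption, adjust to your namespace and resource names) is to inspect the effective settings inside the worker pod:

```shell
# Exec into the worker pod and print Prefect's resolved settings,
# including where each value came from (env var, profile, default).
kubectl exec deploy/prefect-worker -- prefect config view --show-sources
```

Settings sourced from env vars should be labeled accordingly in the output, confirming they override any baked-in profile.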
2) Ensure flow-run pods inherit the same env vars

For Kubernetes work pools, add these environment variables in the work pool's job/pod template. This ensures the worker puts them into each Job it creates for your flows. Env vars on the worker container do not automatically propagate to flow pods unless included in the job template.

Example (pod template fragment you can add to your work pool job template):
```
spec:
  template:
    spec:
      containers:
        - name: flow
          env:
            - name: PREFECT_LOGGING_LEVEL
              value: DEBUG
            - name: PREFECT_LOGGING_LOG_PRINTS
              value: "true"
            - name: PREFECT_LOGGING_TO_API_ENABLED
              value: "true"
```
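If you manage the work pool from the CLI rather than the UI, one way to edit its job template is to export the default template, add the env vars, and re-apply it. A sketch (the pool name and file name are illustrative):

```shell
# Dump the default Kubernetes base job template to a file.
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json

# ...edit base-job-template.json to add the PREFECT_LOGGING_* env vars...

# Apply the edited template to your existing work pool.
prefect work-pool update my-k8s-pool --base-job-template base-job-template.json
```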
3) Use a custom logging config file (optional, for full control)

Create a ConfigMap containing a logging.yml and mount it into both your worker and flow-run pods, then point PREFECT_LOGGING_CONFIG_PATH at that path.

ConfigMap:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: prefect-logging
data:
  logging.yml: |
    version: 1
    formatters:
      standard:
        format: "%(asctime)s | %(levelname)s | %(name)s - %(message)s"
    handlers:
      console:
        class: logging.StreamHandler
        level: INFO
        formatter: standard
    root:
      level: INFO
      handlers: [console]
```
Mount and reference it:
```
# ...
volumes:
  - name: prefect-logging
    configMap:
      name: prefect-logging
containers:
  - name: worker
    volumeMounts:
      - name: prefect-logging
        mountPath: /etc/prefect
    env:
      - name: PREFECT_LOGGING_CONFIG_PATH
        value: /etc/prefect/logging.yml
```
Repeat the same volume and env in the work pool's pod template so flow pods use the same config.

Notes and precedence
- Environment variables override profiles and other settings sources, which is why they're ideal for Kubernetes.
- If you only set env vars on the worker, your flow pods won't see them unless the work pool job template includes them. Set both if you want consistent behavior.

Quick check: are you deploying your worker via Helm or raw manifests, and do you want these settings for the worker, the flow-run pods, or both? If you share how you're creating the work pool/job template, I can give you an exact snippet to paste.
k
@Marvin I use official prefect helm charts. "Do you want these settings for the worker, the flow-run pods, or both?" - I'm not sure. What happens if I only set these variables for flow-run pods?
m
thought for 70.1 seconds
Short answer: if you only set the logging env vars on your flow-run pods, only those pods will honor them. Your worker will keep its defaults. That means:

- Flow-run logs: will follow whatever you set (level, prints-to-logs, to-API, custom config, etc.) and appear in the Prefect UI if PREFECT_LOGGING_TO_API_ENABLED=true in the flow pods.
- Worker logs: will not be affected. They'll stay at the default config and go to the worker pod's stdout/stderr. They will not show up in the UI unless you also enable API logging on the worker.

When this is fine
- If you only care about logs attached to flow runs in the UI, configuring just the flow pods is sufficient.

When you should set both
- If you want to see worker activity (polling, job submission, infra errors that happen before a flow pod starts, etc.) centrally or in the UI, set the same env vars on the worker too. Otherwise, you'll need kubectl logs on the worker pod to debug those issues.

Using the official Helm charts
- Put env vars on the worker container in your worker chart values so the worker itself is configured.
- Put env vars into the work pool's base job/pod template (via the worker chart's work pool config) so every flow-run pod inherits them.

Docs
- Settings and profiles
- Logging customization
- Deploy a worker using Helm

If you can share your worker chart values (or how you're defining the work pool/pod template), I'll give you an exact values.yaml snippet you can paste.
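For the worker half of the Helm setup, a minimal values.yaml sketch. This assumes the official prefect-worker chart exposes a Bitnami-style `worker.extraEnvVars` list (check your chart version's values schema); the work pool name is illustrative:

```
worker:
  config:
    workPool: my-k8s-pool
  # Assumed chart key for injecting env vars into the worker container.
  extraEnvVars:
    - name: PREFECT_LOGGING_LEVEL
      value: INFO
    - name: PREFECT_LOGGING_LOG_PRINTS
      value: "true"
```

Flow-run pods still need the same variables added via the work pool's base job template, as described above.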