Oleksandr Riabyi
08/15/2025, 3:45 PM
Marvin
08/15/2025, 3:45 PM
Marvin
08/15/2025, 3:47 PM
In Prefect 3, the supported global env var is PREFECT_LOGGING_ROOT_LEVEL (not PREFECT_LOGGING_LEVEL), but Prefect does not natively support PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL. Two workable patterns:
Option A — Use a custom logging config file and inject your env vars
- Point Prefect at a dictConfig file with PREFECT_LOGGING_SETTINGS_PATH
- Template the console handler level from an env var at runtime (entrypoint script, init container, or CI step)
Example entrypoint (POSIX shells):
#!/usr/bin/env sh
set -eu
# Back-compat for the 2.x-style variable name if you're still setting it
: "${PREFECT_LOGGING_ROOT_LEVEL:=${PREFECT_LOGGING_LEVEL:-INFO}}"
: "${PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL:=WARNING}"
# Unquoted heredoc so the shell expands the level variables into the rendered file
cat >/tmp/prefect-logging.yml <<YAML
version: 1
disable_existing_loggers: false
formatters:
  console:
    format: "%(asctime)s | %(levelname)-8s | %(name)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    stream: ext://sys.stderr
    level: ${PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL}
    formatter: console
loggers:
  prefect:
    level: ${PREFECT_LOGGING_ROOT_LEVEL}
    handlers: [console]
    propagate: false
root:
  level: ${PREFECT_LOGGING_ROOT_LEVEL}
  handlers: [console]
YAML
export PREFECT_LOGGING_SETTINGS_PATH=/tmp/prefect-logging.yml
exec "$@"
- Set PREFECT_LOGGING_ROOT_LEVEL=INFO (global) and PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL=WARNING (console) and you're done.
- If you prefer, pre-render this file in your build process or use envsubst. The important bit is that Prefect reads the file via PREFECT_LOGGING_SETTINGS_PATH.
Option B — Programmatically override the console handler early
If you can run a small bootstrap before your flows:
import logging
import os

def set_console_level_from_env(var="PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL", default="WARNING"):
    level_name = os.getenv(var, default).upper()
    level = getattr(logging, level_name, logging.INFO)
    # Tweak both the 'prefect' logger and the root, catching the console StreamHandler(s)
    for name in ("prefect", ""):
        logger = logging.getLogger(name)
        for h in list(logger.handlers):
            if isinstance(h, logging.StreamHandler):
                h.setLevel(level)

# Call this as early as possible in your process, before flows start logging
set_console_level_from_env()
Notes and gotchas
- In Prefect 3.x, the supported global env var is PREFECT_LOGGING_ROOT_LEVEL. If you're used to PREFECT_LOGGING_LEVEL from 2.x, map it as shown above or update your env.
- Prefect does not read a PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL setting on its own; you must use one of the patterns above.
- Option A is the most robust for workers/containers because it centralizes logging in one config file and avoids ordering issues.
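To confirm which levels actually took effect, a quick sanity check with nothing but the standard library (run it inside a flow or a Python shell in the same environment):
import logging

def dump_console_levels():
    # Print the effective level of the 'prefect' and root loggers, plus any StreamHandlers attached to them
    for name in ("prefect", ""):
        logger = logging.getLogger(name)
        label = name or "root"
        print(f"logger {label!r}: level={logging.getLevelName(logger.getEffectiveLevel())}")
        for h in logger.handlers:
            if isinstance(h, logging.StreamHandler):
                print(f"  handler {type(h).__name__}: level={logging.getLevelName(h.level)}")

dump_console_levels()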
Docs for reference:
- Logging customization: Prefect 3 docs – Logging customization
- Logging handlers API: Prefect 3 API – Logging handlers
If you tell me how you're running flows (local, Docker, K8s, agents/workers), I'll tailor the minimal setup for that environment. I live to serve. Begrudgingly.
Oleksandr Riabyi
08/15/2025, 3:50 PM
Marvin
08/15/2025, 3:54 PM
Marvin
08/15/2025, 3:55 PM
Short answer: you can mount a static file and point Prefect at it with PREFECT_LOGGING_SETTINGS_PATH. I'm not asking you to rewrite files for sport.
Why I suggested "rewriting" (rendering) the file: Prefect 3 will load your dictConfig as-is; it does not interpolate environment variables inside that YAML/JSON. Since Prefect doesn't support PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL natively, the only way to drive the console handler level from an env var is to render it into the file before Prefect reads it. On Kubernetes, that's easy with an initContainer or a tiny entrypoint.
If you’re fine with a static level in the file, just mount it and set the path. No rewriting needed.
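If you'd rather not pull in envsubst, the same rendering step is a few lines of Python with string.Template. A minimal sketch; the template and output paths are illustrative, and the two variable names are the ones used throughout this thread, not settings Prefect reads itself:
import os
from string import Template

# Substitute ${VAR} placeholders in the logging template from the environment,
# falling back to defaults, then write the file Prefect will read via
# PREFECT_LOGGING_SETTINGS_PATH.
defaults = {
    "PREFECT_LOGGING_ROOT_LEVEL": "INFO",
    "PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL": "WARNING",
}
values = {**defaults, **{k: v for k, v in os.environ.items() if k in defaults and v}}

with open("/templates/logging.tpl.yml") as src:
    rendered = Template(src.read()).substitute(values)

with open("/etc/prefect/logging.yml", "w") as dst:
    dst.write(rendered)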
Kubernetes snippets
1) Static file (no rewrite)
- Create a ConfigMap with your final logging config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prefect-logging
data:
  logging.yml: |
    version: 1
    disable_existing_loggers: false
    formatters:
      console:
        format: "%(asctime)s | %(levelname)-8s | %(name)s - %(message)s"
    handlers:
      console:
        class: logging.StreamHandler
        stream: ext://sys.stderr
        level: WARNING      # console handler level
        formatter: console
    loggers:
      prefect:
        level: INFO         # global Prefect logger level
        handlers: [console]
        propagate: false
    root:
      level: INFO           # global Python root level
      handlers: [console]
- Mount it and point Prefect at it:
env:
  - name: PREFECT_LOGGING_SETTINGS_PATH
    value: /etc/prefect/logging.yml
  - name: PREFECT_LOGGING_ROOT_LEVEL
    value: INFO
volumeMounts:
  - name: prefect-logging
    mountPath: /etc/prefect
volumes:
  - name: prefect-logging
    configMap:
      name: prefect-logging
      items:
        - key: logging.yml
          path: logging.yml
2) Single template + env-driven levels (rewrite via initContainer)
- ConfigMap with a template that references env vars (gettext's envsubst does not expand shell-style ${VAR:-default} fallbacks, so keep plain ${VAR} placeholders and set the defaults on the initContainer's env instead):
apiVersion: v1
kind: ConfigMap
metadata:
  name: prefect-logging-templates
data:
  logging.tpl.yml: |
    version: 1
    disable_existing_loggers: false
    formatters:
      console:
        format: "%(asctime)s | %(levelname)-8s | %(name)s - %(message)s"
    handlers:
      console:
        class: logging.StreamHandler
        stream: ext://sys.stderr
        level: ${PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL}
        formatter: console
    loggers:
      prefect:
        level: ${PREFECT_LOGGING_ROOT_LEVEL}
        handlers: [console]
        propagate: false
    root:
      level: ${PREFECT_LOGGING_ROOT_LEVEL}
      handlers: [console]
- InitContainer renders it with envsubst, then the main container uses it:
initContainers:
  - name: render-logging
    image: alpine:3.20
    command: ["/bin/sh", "-c"]
    args:
      - "apk add --no-cache gettext >/dev/null && envsubst < /templates/logging.tpl.yml > /rendered/logging.yml"
    env:
      - name: PREFECT_LOGGING_ROOT_LEVEL
        value: "INFO"
      - name: PREFECT_LOGGING_HANDLERS_CONSOLE_LEVEL
        value: "WARNING"
    volumeMounts:
      - name: logging-templates
        mountPath: /templates
      - name: logging-rendered
        mountPath: /rendered
containers:
  - name: prefect
    env:
      - name: PREFECT_LOGGING_SETTINGS_PATH
        value: /etc/prefect/logging.yml
    volumeMounts:
      - name: logging-rendered
        mountPath: /etc/prefect
volumes:
  - name: logging-templates
    configMap:
      name: prefect-logging-templates
  - name: logging-rendered
    emptyDir: {}
Summary
- If you want a single, static file: mount it and set PREFECT_LOGGING_SETTINGS_PATH. Done.
- If you want to control the console handler level by environment variable per deployment: render the file first (initContainer or entrypoint), because Prefect won’t expand env vars inside the logging config on its own.
Docs, for when insomnia strikes:
- Prefect 3 – Logging customization
Marvin
08/15/2025, 3:55 PM
Oleksandr Riabyi
08/18/2025, 3:24 PM
Marvin
08/18/2025, 3:24 PM
Marvin
08/18/2025, 3:28 PM
Short answer: deployment-triggered runs don't honor a flow_run_name set via with_options(...).
In Prefect 3, a deployment stores an entrypoint to your flow function (module:path) and, at run time, the worker re-imports that function. The Flow instance you created with debug_ml_template_test.with_options(...) is not what actually runs in a deployment, so its flow_run_name never takes effect.
How to set the run name for deployment-triggered runs:
- Put it on the flow decorator so it’s part of the importable function
from datetime import datetime
from prefect import flow

def run_name():
    # compute anything you want here (time, parameters via get_run_context, etc.)
    return f"ALEG_{datetime.utcnow():%Y%m%d_%H%M%S}"

@flow(name="Debug simple flow", flow_run_name=run_name)
def debug_simple_flow(message: str | None = None):
    ...
- Or set the name when you trigger the deployment (UI/API/CLI). Runs started via the UI have a “Run name” field; the Python helper to trigger a deployment also accepts a run name.
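For the Python route, run_deployment accepts a flow_run_name; a minimal sketch, assuming the "<flow name>/<deployment name>" below matches your deployment (the names here are examples drawn from this thread):
from datetime import datetime, timezone
from prefect.deployments import run_deployment

# Trigger an ad-hoc run of an existing deployment with an explicit run name;
# scheduled runs still use whatever naming logic the flow itself defines.
flow_run = run_deployment(
    name="Debug simple flow/ml-template-test",  # example "<flow name>/<deployment name>"
    flow_run_name=f"ALEG_{datetime.now(timezone.utc):%Y%m%d_%H%M%S}",
)
print(flow_run.name)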
Why your snippet likely didn’t work:
- with_options(flow_run_name=...) only affects ad-hoc executions of that specific Flow object.
- f"ALEG_{flow.name}" also looks suspect: there's no flow in scope unless you meant the function/Flow itself or you're pulling the name from context.
Docs, so you don’t have to take my word for it:
- Deployments in Prefect 3
- Flows
If you want the run name to depend on parameters (e.g., message), define a run_name() callable and use get_run_context() inside it to read ctx.parameters. That will run at flow start and set the name accordingly.
Oleksandr Riabyi
08/18/2025, 3:33 PM
Marvin
08/18/2025, 3:33 PM
Marvin
08/18/2025, 3:34 PM
Correct: the deployment won't preserve a flow_run_name set via with_options(...). In Prefect 3, a deployment stores an entrypoint (module:path). When a run starts, the worker re-imports that entrypoint and constructs a fresh Flow from the decorated function. Any ephemeral tweaks you made to a Flow object during deployment (like with_options(flow_run_name=...)) are lost unless they're part of the importable entrypoint.
What works
Option A — set it on the flow itself (simplest)
from datetime import datetime
from prefect import flow

def run_name():
    return f"ALEG_{datetime.utcnow():%Y%m%d_%H%M%S}"

@flow(name="ML template test", flow_run_name=run_name)
def debug_ml_template_test(sleep_iterations: int = 60):
    ...
Pros: straightforward. Cons: you said you want to keep deployments centralized without editing each flow.
Option B — use an entrypoint “shim” that exports the flow with options applied
Create a tiny module that imports the flow and re-exports a Flow with the run name configured. Point your deployment at this shim as the entrypoint.
# entrypoints/debug_ml_template_entrypoint.py
from flows.debug.debug_ml_template import debug_ml_template_test as base_flow

# Use a callable so it's evaluated at run time
debug_ml_template_test = base_flow.with_options(
    flow_run_name=lambda: f"ALEG_{base_flow.name}"
)
Then deploy using the shim as the entrypoint (so the worker imports the wrapped Flow, not the original):
from prefect import Flow

Flow.from_source(
    source="file:///Users/admin/PycharmProjects/prefect",
    entrypoint="entrypoints/debug_ml_template_entrypoint.py:debug_ml_template_test",
).deploy(
    name="ml-template-test",
    work_pool_name="default",
    # ... other args
)
Option C — set the run name when triggering the run
- UI: the “Run name” field when you click Run on a deployment
- Python/API: pass a run name when calling the deployment (useful for ad‑hoc runs, not schedules)
Notes
- flow.with_options(...).deploy(...) doesn't persist the run name into the deployment because the deployment only stores how to import your flow, not the modified object you created in-process.
- If you want the name to include parameters, use a callable and get_run_context() inside it:
from prefect.context import get_run_context

def run_name():
    ctx = get_run_context()
    p = ctx.parameters
    return f"ALEG_{ctx.flow.name}_{p.get('sleep_iterations', 'na')}"
Docs for reference:
- Deployments
- Flows
If you tell me whether you can touch the flow modules, I'll suggest the cleanest of these patterns for your setup. I'll try to contain my excitement.
Nate
08/18/2025, 7:09 PM
Marvin
08/18/2025, 7:09 PM
Marvin
08/18/2025, 7:11 PM
Thread summary:
- with_options(flow_run_name=...) before .deploy() doesn't carry into deployment-triggered runs
- How to set run names reliably: on the decorated flow, via an entrypoint shim, or per-run when triggering
- A request for confirmation/docs clarity
If you want me to add your exact code and use case details, paste them here and I'll update the thread.
08/20/2025, 7:54 AM
Nate
08/20/2025, 2:21 PM
Oleksandr Riabyi
08/20/2025, 2:23 PM
Nate
08/20/2025, 2:24 PM