# marvin-ai
@Marvin while I'm executing multiple runs, the logs for each run are inconsistent, i.e. some of the logs are missing in one run while they are printed in another run
thought for 133.3 seconds
Got it — inconsistent/missing logs between runs can happen for a few reasons. A few quick checks and fixes usually make them reliable.

Questions to narrow it down
- Where are you looking at logs? Prefect UI/Cloud, worker stdout, or both?
- How are you running the flow? Locally vs via a deployment picked up by a worker? If a worker, which environment/container?
- How are you emitting logs? `print`, Python `logging`, or `get_run_logger()`? Do you have `log_prints=True` set on the `@flow`/`@task`?
- Any concurrency (e.g., `ThreadPoolTaskRunner`/`ProcessPoolTaskRunner`)?
- Are any tasks cached or skipped on those runs (e.g., result cache policy)? Cached/skipped tasks won't emit runtime logs.

Common causes and fixes
- Using `print` without `log_prints`: Prefect won't capture prints unless `log_prints=True` is set on the `@flow`/`@task`. Also consider `print(..., flush=True)` to avoid buffering.
- Stdout buffering in containers/processes: set `PYTHONUNBUFFERED=1` in the worker environment (or run Python with `-u`) to reduce missing/delayed prints, as shown in the deployment sketch after the minimal example below.
- Prefer the run logger for important messages: `logger = get_run_logger(); logger.info("...")` reliably ships logs to Prefect.
- Different log levels across runs: confirm `PREFECT_LOGGING_LEVEL` is consistent for all runs/workers.
- Logs emitted before the run is "Running": prints/logging before the Prefect context initializes may not be captured.
- Cached or skipped work: if a task is retrieved from cache, its inner logs won't run on that execution. Check task states for "Completed (cached)" or "Skipped".
- Network hiccups (Cloud/Server): intermittent network issues from the worker to the API can cause gaps; if you suspect this, we can enable DEBUG and inspect worker logs.

Minimal example to verify capture
```python
from prefect import flow, task, get_run_logger

@task(log_prints=True)
def t(i):
    print(f"task print {i}", flush=True)
    get_run_logger().info(f"task log {i}")

@flow(log_prints=True)
def f(n: int = 3):
    for i in range(n):
        t.submit(i)
    print("flow print", flush=True)
    get_run_logger().info("flow log")

if __name__ == "__main__":
    f()
```
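If prints only go missing when runs execute on a worker, the environment variables mentioned above have to reach that runtime. Here is a rough sketch (not from the thread; it assumes Prefect 3.x and a Docker-type work pool, and the deployment, pool, and image names are placeholders) of passing them through a deployment's `job_variables`:

```python
# Hedged sketch, assuming Prefect 3.x and a Docker-type work pool.
# All names below (deployment, work pool, image) are placeholders.
from prefect import flow

@flow(log_prints=True)
def my_flow():
    print("hello from the worker", flush=True)

if __name__ == "__main__":
    my_flow.deploy(
        name="consistent-logs",               # placeholder deployment name
        work_pool_name="my-docker-pool",      # placeholder work pool
        image="my-registry/my-image:latest",  # placeholder image tag
        job_variables={
            "env": {
                "PYTHONUNBUFFERED": "1",          # stop stdout buffering in the container
                "PREFECT_LOGGING_LEVEL": "INFO",  # same level for every run
            }
        },
    )
```

The same `env` job variable idea generally applies to process and Kubernetes work pools too; the point is that the variables must be set where the flow actually runs, not just on your laptop.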
If the minimal example above logs consistently but your flow doesn't, it's likely environment/config related.

What to try next (quick checklist)
- Ensure `@flow(log_prints=True)` and `@task(log_prints=True)` wherever you use `print`.
- Switch critical messages to `get_run_logger().info(...)`.
- Set `PYTHONUNBUFFERED=1` on the worker/container and/or use `print(..., flush=True)`.
- Set the same `PREFECT_LOGGING_LEVEL` for all runs; try `INFO` or `DEBUG`.
- Check the Prefect UI log filters (levels/scope) and task states for caching/skips (see the caching sketch after this list).
- If using a process-based task runner or containers, confirm those env vars are present in that runtime, not just locally.
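For the caching item, a quick way to confirm whether a run is replaying a cached result (and therefore never re-executing the task body or its logs) is to inspect the returned state, or to opt a task out of caching while you debug. A rough sketch, assuming Prefect 3.x (`NO_CACHE` and `return_state` come from that API; the function names are made up for illustration):

```python
# Hedged sketch, assuming Prefect 3.x; task/flow names are illustrative only.
from prefect import flow, task
from prefect.cache_policies import NO_CACHE

@task(cache_policy=NO_CACHE)  # opt out of caching so the body (and its logs) runs every time
def always_runs(i: int) -> int:
    print(f"computing {i}", flush=True)
    return i * 2

@flow(log_prints=True)
def debug_caching():
    # return_state=True exposes the final state, so you can see whether the task
    # was computed fresh or served from cache on this particular run
    state = always_runs(1, return_state=True)
    print(f"task finished in state: {state.name}")

if __name__ == "__main__":
    debug_caching()
```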
Docs for reference
- Logging customization
- Loggers API
- Logs model

If you can share:
- Prefect version, how the flow is executed (local vs deployment), worker/container details
- Whether logs are from `print` vs `get_run_logger`
- A small snippet that reproduces the missing logs
…I can suggest a targeted fix.