# ask-marvin
r
@Marvin Logging is getting shut down during my flow. At a certain task, all logging stops. Have you seen this before?
m
thought for 297.9 seconds
Yes, unfortunately I’ve seen a few ways a perfectly good logger goes quiet mid-run. Assuming you’re on Prefect 3.x, here are the usual suspects and quick checks:

Common causes
- Kubernetes log streaming idle timeout: if you’re running with a Kubernetes worker and a task is quiet for ~10 minutes, the worker’s log stream can drop while the job keeps running. You’ll see something like “Error occurred while streaming logs - Job will continue to run but logs will no longer be streamed to stdout.” It’s a known issue; details here: Issue #16210.
  - Workarounds: emit a heartbeat log every few minutes from long/quiet tasks, or rely on API logs (visible in the UI) instead of just the worker’s stdout stream.
- Logging reconfigured or shut down inside a task: calls to `logging.basicConfig(force=True)`, `logging.getLogger().handlers.clear()`, or a library that fiddles with global logging can remove Prefect’s handlers mid-run.
  - Fix: remove/guard your logging reconfig, or let Prefect own logging. If you must customize, do it once at process start and avoid `force=True` (see the sketch at the end of this message).
- `disable_run_logger()` used (directly or via a helper): this disables the run logger for the current context.
  - Fix: remove it and use `from prefect.logging import get_run_logger` within tasks.
- Child processes/threads: logs from a subprocess won’t have Prefect’s run context and may not show up unless you explicitly route them (capture stdout/stderr, or use the run logger in the child process).
- “Prints stopped being logged” vs “logs stopped”: if only prints are missing, confirm your print-to-log configuration in Prefect 3.x (see logging settings).

Quick diagnostics you can drop into the suspect task:
```python
import logging
from prefect.logging import get_run_logger
import prefect

def inspect_logging_state():
    # Call this from inside a flow or task run; it reports which
    # handlers are currently attached so you can spot when they vanish.
    rlog = get_run_logger()
    rlog.info(f"Prefect {prefect.__version__} - inspecting logging state")
    rlog.info(f"Root handlers: {logging.getLogger().handlers}")
    rlog.info(f"Prefect logger handlers: {logging.getLogger('prefect').handlers}")
```
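For example, bracket the suspect task’s work with it (the task name below is a hypothetical stand-in for yours):

```python
from prefect import task

@task
def suspect_task():
    inspect_logging_state()  # handlers should be present here
    ...                      # the work after which logging goes quiet
    inspect_logging_state()  # if handlers are gone here, something cleared them
```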
- If handlers are empty at this point, something is reconfiguring or shutting down logging in your code or a dependency.

Helpful docs
- Prefect 3 logging customization and settings: Logging customization (Prefect 3)

To help pinpoint your case, can you share:
- How you’re running (worker type: Kubernetes/Docker/Process; Prefect Cloud/Server; versions of Prefect and any integrations like prefect-kubernetes)
- Whether you see the “Error occurred while streaming logs” message in worker logs around the time logging stops
- Whether the “quiet” task spawns a subprocess or runs a long external command
- Any logging configuration you apply in your flow/tasks (basicConfig, loguru, etc.)

With that, I can suggest a precise fix instead of just waving a wrench at it.
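And as promised, a minimal sketch of customizing logging without clobbering Prefect’s handlers; the handler and format string are placeholders, adapt to taste:

```python
import logging

# Run once at process start. Adding a handler leaves existing handlers
# in place; logging.basicConfig(force=True) would instead remove every
# handler already attached to the root logger, including Prefect's.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logging.getLogger().addHandler(handler)
```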
r
@Marvin We're running in Kubernetes using a Prefect v3 k8s worker. The flow is sourced from a Docker image. We do not see the 'error occurred while streaming logs' message, though I haven't looked at the pod logs or pod status yet.
m
thought for 108.6 seconds
Thanks, that narrows it down. Given you’re on a Prefect 3.x Kubernetes worker with a Docker-sourced flow, there are two separate logging paths at play:
- Flow pod → Prefect API (logs show in the UI)
- Worker → streams the flow pod’s logs to its own stdout

When logging “stops,” it’s usually one of:
1) Worker’s log stream dropped, but the flow pod kept logging to the API (logs still appear in the UI).
2) The flow pod actually stopped emitting logs (UI logs also stop), often due to logging being reconfigured/cleared inside a task or a quiet, long-running subprocess.

Quick checks
- Do UI logs stop at the same time?
  - If UI logs continue, it’s just the worker’s stream. Check the worker pod logs for a streaming error.
  - If UI logs also stop, the flow pod stopped logging (see “Inside the task” below).
- Inspect the worker pod logs for streaming errors:
  - kubectl logs deploy/<your-worker-deployment> -n <ns>
  - Look around the time logs stop. There’s a known idle-stream issue around 10 minutes of silence; details: Issue #16210.
- Inspect the flow pod logs directly:
  - kubectl logs job/<your-flow-job> -n <ns> --all-containers
  - If you see logs here but not in the worker output, it’s the stream. If you see no logs, the pod isn’t producing any.

Inside the task (common culprits)
- Long, quiet work: if a task runs for >10 minutes without emitting logs, the worker’s stream can drop even though the task is fine. Add a small heartbeat:
```python
from prefect import task
from prefect.logging import get_run_logger
import time

@task
def long_task():
    log = get_run_logger()
    for _ in range(12):  # heartbeat once a minute for ~12 minutes
        time.sleep(60)
        log.info("still working...")
    # actual work...
```
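If the long stretch is itself blocking (a CPU-bound loop or a library call you can’t interleave with sleeps), one pattern, sketched here rather than taken from the docs, is a daemon thread that heartbeats while the main thread works; Python’s logging module is thread-safe, so sharing the run logger with the helper thread should be fine:

```python
import threading

from prefect import task
from prefect.logging import get_run_logger

@task
def long_blocking_task():
    log = get_run_logger()
    stop = threading.Event()

    def heartbeat():
        # Log every 60s until the main thread signals completion.
        while not stop.wait(60):
            log.info("still working...")

    t = threading.Thread(target=heartbeat, daemon=True)
    t.start()
    try:
        # Stand-in for your real blocking work:
        result = sum(i * i for i in range(10**8))
        log.info(f"done: {result}")
    finally:
        stop.set()
        t.join()
```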
- Subprocess output is not streamed: If the task runs a CLI, make sure you stream stdout line-by-line and log it:
```python
import subprocess
from prefect import task
from prefect.logging import get_run_logger

@task
def run_cmd(cmd):
    log = get_run_logger()
    # Merge stderr into stdout and read line-buffered text so output
    # reaches the run logs as it happens, not in one burst at the end.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, bufsize=1)
    for line in p.stdout:
        log.info(line.rstrip())
    return p.wait()
```
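Hypothetical usage from a flow (the command is just an illustration; `-u` keeps Python’s own output unbuffered):

```python
from prefect import flow

@flow
def pipeline():
    # Stream the CLI's output into the run logs instead of letting it go quiet.
    code = run_cmd(["python", "-u", "-c", "print('hello from the subprocess')"])
    if code != 0:
        raise RuntimeError(f"command exited with {code}")
```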
- Logging reconfigured mid-run: calls like `logging.basicConfig(force=True)`, `logging.getLogger().handlers.clear()`, or libraries that tamper with logging can drop Prefect’s handlers.
  - Drop this into the suspect task to verify:
```python
import logging
from prefect import task
from prefect.logging import get_run_logger

@task
def inspect_logging():
    log = get_run_logger()
    log.info(f"root handlers: {logging.getLogger().handlers}")
    log.info(f"prefect handlers: {logging.getLogger('prefect').handlers}")
```
If handlers vanish before/after the task, something is shutting them down.
- `disable_run_logger()` was called (directly or indirectly). Remove it.

Helpful docs
- Logging customization (Prefect 3): Docs

If you can share:
- Exact versions: Prefect core and prefect-kubernetes
- Whether UI logs stop at the same moment as worker stdout
- A snippet of what that “quiet” task does (subprocess? long CPU-bound loop? any logging setup?)
- Any worker pod log lines around the time logging stops

…then I can point at the precise fix rather than mumbling about the void.