# marvin-ai
l
@Marvin I have a self-hosted Prefect server and a worker (on Rocky Linux 9.6) and I am missing logs from my flows. They run and complete correctly, but I can see from the loggers that the last call to send logs to the API is not present. I can run the exact same flow on my Windows machine against the same remote server and the logs are present, along with the API call to send them. This looks like an edge config issue to me; what could that be (PREFECT_LOGGING_TO_API_ENABLED='True' in both places)?
m
thought for 381.2 seconds
Thanks for the details — this does sound like a "final log batch didn't flush before process exit" issue rather than an API or server problem, especially since the same flow logs correctly from Windows against the same server. In Prefect 3.x, logs are batched and sent to the API on a short interval by a background log worker. If the flow process exits quickly after the last log, the last batch can be dropped on some Linux setups unless there's a moment for the background worker to flush.

Key settings to check and tune
- `PREFECT_LOGGING_TO_API_ENABLED=true`
- `PREFECT_LOGGING_TO_API_BATCH_INTERVAL=2.0` (default, seconds) — try lowering to 0.2–0.5
- `PREFECT_LOGGING_TO_API_BATCH_SIZE=4000000` (default, bytes)

Docs:
- Settings reference
- Logging handlers overview

What I'd try

1) Verify the settings exactly as seen by the worker/flow process on Rocky
- Run this in the same environment as the worker/flow:
```bash
prefect config view --show-defaults --show-sources
```
Confirm that `PREFECT_LOGGING_TO_API_ENABLED` is true and note the values for the two batching settings. Environment values should appear with source "env".

2) Reduce the batch interval on the Linux worker
- Temporarily set:
```bash
export PREFECT_LOGGING_TO_API_BATCH_INTERVAL=0.2
```
Then restart the worker and re-run the flow. If the "missing last call" disappears, it's a flush timing issue.

3) Add a quick test delay at the very end of your flow (just to confirm the diagnosis)
- For async flows (a synchronous variant follows the example below):
```python
from prefect import flow, get_run_logger
import anyio

@flow
async def demo():
    logger = get_run_logger()
    logger.info("End of flow")
    # give the log worker a moment to flush
    await anyio.sleep(0.5)
```
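The same end-of-flow delay for a synchronous flow (a minimal sketch; `demo_sync` and the 0.5s value are only illustrative):

```python
from prefect import flow, get_run_logger
import time

@flow
def demo_sync():
    logger = get_run_logger()
    logger.info("End of flow")
    # give the log worker a moment to flush before the process exits
    time.sleep(0.5)
```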
If this makes the last logs arrive, it confirms the flush timing hypothesis. You can remove it after tuning the batch interval.

4) Ensure the worker/flow process gets a graceful shutdown window
- If you're running the worker under systemd, make sure the service allows a clean stop (e.g. `TimeoutStopSec=30`) and avoid anything that would send an immediate SIGKILL at flow end.
- If your flow runs in a subprocess or container, ensure the subprocess/container isn't being terminated too aggressively at completion.

5) Edge/reverse proxy checks (if you still suspect it)
- Since earlier log batches arrive, an edge issue is less likely. Still, ensure the proxy isn't closing idle connections too aggressively: keepalive timeout > 5s, and no special buffering/blocking on the small POST requests that send logs to `/api/logs/`.
- If you can, check the proxy access logs to see if the final POST ever arrives.

Helpful diagnostics
- Temporarily set:
```bash
export PREFECT_LOGGING_LEVEL=DEBUG
```
Then look for worker/flow logs that indicate log batch sends, and compare Rocky vs Windows behavior around the last seconds of the run.
- Capture the worker logs around the end of the flow run to see if there's any indication of pending logs on exit.

If you can share:
- Prefect version on both machines (`prefect version`)
- How the worker is started on Rocky (systemd service? container? direct shell?)
- Whether the flows run in containers/subprocesses
- The output of `prefect config view --show-defaults --show-sources` on the Rocky box (redact secrets)

I can help narrow it down further and recommend a permanent fix.

References:
- Prefect 3 settings reference
- Logging handlers overview
l
@Marvin I have tried to lower the batch interval to 0.2 seconds and added a sleep of 15 seconds, and still there is no call to send the logs. The last output is
2025-10-03 12:33:43,783 | INFO | prefect.flow_runs | Finished in state Completed().
I am using Prefect version 3.4.20 and I am currently debugging by directly executing the flow in the environment by calling `python test_flow.py`.
m
Your message was too long, here's your message at the allowed limit: I have tried to lower the batch interval to 0.2 seconds and added a sleep of 15 seconds and still there is no call to send the logs. The last output is
2025-10-03 12:33:43,783 | INFO | prefect.flow_runs | Finished in state Completed()
I am using Prefect version 3.4.20 and I am currently debugging by directly executing the flow in the environment by calling `python test_flow.py`.
Here are the config values
🚀 you are connected to: http://redacted
PREFECT_PROFILE='ephemeral'
PREFECT_API_AUTH_STRING='****' (from defaults)
PREFECT_API_BLOCKS_REGISTER_ON_START='True' (from defaults)
PREFECT_API_DATABASE_CONNECTION_TIMEOUT='5.0' (from defaults)
PREFECT_API_DATABASE_CONNECTION_URL='****' (from defaults)
PREFECT_API_DATABASE_DRIVER='None' (from defaults)
PREFECT_API_DATABASE_ECHO='False' (from defaults)
PREFECT_API_DATABASE_HOST='None' (from defaults)
PREFECT_API_DATABASE_MIGRATE_ON_START='True' (from defaults)
PREFECT_API_DATABASE_NAME='None' (from defaults)
PREFECT_API_DATABASE_PASSWORD='****' (from defaults)
PREFECT_API_DATABASE_PORT='None' (from defaults)
PREFECT_API_DATABASE_TIMEOUT='10.0' (from defaults)
PREFECT_API_DATABASE_USER='None' (from defaults)
PREFECT_API_DEFAULT_LIMIT='200' (from defaults)
PREFECT_API_ENABLE_HTTP2='False' (from defaults)
PREFECT_API_ENABLE_METRICS='False' (from defaults)
PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL='0:05:00' (from defaults)
PREFECT_API_EVENTS_STREAM_OUT_ENABLED='True' (from defaults)
PREFECT_API_KEY='****' (from defaults)
PREFECT_API_LOG_RETRYABLE_ERRORS='False' (from defaults)
PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS='10000' (from defaults)
PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES='10000' (from defaults)
PREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS='20.0' (from defaults)
PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED='False' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE='20' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL='5.0' (from defaults)
PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS='60' (from defaults)
PREFECT_API_SERVICES_FOREMAN_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS='30' (from defaults)
PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE='3' (from defaults)
PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS='15.0' (from defaults)
PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS='60' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS='0:00:15' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS='5.0' (from defaults)
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS='5.0' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00' (from defaults)
PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_TRIGGERS_ENABLED='True' (from defaults)
PREFECT_API_SSL_CERT_FILE='None' (from defaults)
PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH='2000' (from defaults)
PREFECT_API_TLS_INSECURE_SKIP_VERIFY='False' (from defaults)
PREFECT_API_URL='http://redacted/api' (from env)
PREFECT_CLIENT_CSRF_SUPPORT_ENABLED='True' (from defaults)
PREFECT_CLIENT_ENABLE_METRICS='False' (from defaults)
PREFECT_CLIENT_MAX_RETRIES='5' (from defaults)
PREFECT_CLIENT_METRICS_PORT='4201' (from defaults)
PREFECT_CLIENT_RETRY_EXTRA_CODES='set()' (from defaults)
PREFECT_CLIENT_RETRY_JITTER_FACTOR='0.2' (from defaults)
PREFECT_CLI_COLORS='True' (from defaults)
PREFECT_CLI_PROMPT='None' (from defaults)
PREFECT_CLI_WRAP_LINES='True' (from defaults)
PREFECT_CLOUD_API_URL='https://api.prefect.cloud/api' (from defaults)
PREFECT_CLOUD_ENABLE_ORCHESTRATION_TELEMETRY='True' (from defaults)
PREFECT_CLOUD_UI_URL='https://app.prefect.cloud' (from defaults)
PREFECT_DEBUG_MODE='False' (from defaults)
PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE='None' (from defaults)
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='None' (from defaults)
PREFECT_DEFAULT_WORK_POOL_NAME='None' (from defaults)
PREFECT_DEPLOYMENT_CONCURRENCY_SLOT_WAIT_SECONDS='30.0' (from defaults)
PREFECT_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS='50' (from defaults)
PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER='0:01:00' (from defaults)
PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE='500' (from defaults)
PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES='100' (from defaults)
PREFECT_EVENTS_MAXIMUM_SIZE_BYTES='1500000' (from defaults)
PREFECT_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL='0:15:00' (from defaults)
PREFECT_EVENTS_PROACTIVE_GRANULARITY='0:00:05' (from defaults)
PREFECT_EVENTS_RETENTION_PERIOD='7 days, 0:00:00' (from defaults)
PREFECT_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE='250' (from defaults)
PREFECT_EXPERIMENTAL_WARN='True' (from defaults)
PREFECT_EXPERIMENTS_LINEAGE_EVENTS_ENABLED='False' (from defaults)
PREFECT_FLOW_DEFAULT_RETRIES='0' (from defaults)
PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS='0' (from defaults)
PREFECT_HOME='/home/user/.prefect' (from defaults)
PREFECT_LOCAL_STORAGE_PATH='/home/user/.prefect/storage' (from defaults)
PREFECT_LOGGING_COLORS='True' (from defaults)
PREFECT_LOGGING_EXTRA_LOGGERS='[]' (from defaults)
PREFECT_LOGGING_INTERNAL_LEVEL='ERROR' (from defaults)
PREFECT_LOGGING_LEVEL='DEBUG' (from env)
PREFECT_LOGGING_LOG_PRINTS='False' (from defaults)
PREFECT_LOGGING_MARKUP='False' (from defaults)
PREFECT_LOGGING_SERVER_LEVEL='WARNING' (from defaults)
PREFECT_LOGGING_SETTINGS_PATH='/home/user/.prefect/logging.yml' (from defaults)
PREFECT_LOGGING_TO_API_BATCH_INTERVAL='0.2' (from env)
PREFECT_LOGGING_TO_API_BATCH_SIZE='4000000' (from defaults)
PREFECT_LOGGING_TO_API_ENABLED='true' (from env)
PREFECT_LOGGING_TO_API_MAX_LOG_SIZE='1000000' (from defaults)
PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW='warn' (from defaults)
PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION='True' (from defaults)
PREFECT_MEMO_STORE_PATH='/home/user/.prefect/memo_store.toml' (from defaults)
PREFECT_MESSAGING_BROKER='prefect.server.utilities.messaging.memory' (from defaults)
PREFECT_MESSAGING_CACHE='prefect.server.utilities.messaging.memory' (from defaults)
PREFECT_PROFILES_PATH='/home/user/.prefect/profiles.toml' (from defaults)
PREFECT_RESULTS_DEFAULT_SERIALIZER='pickle' (from defaults)
PREFECT_RESULTS_PERSIST_BY_DEFAULT='False' (from defaults)
PREFECT_RUNNER_HEARTBEAT_FREQUENCY='None' (from defaults)
PREFECT_RUNNER_POLL_FREQUENCY='10' (from defaults)
PREFECT_RUNNER_PROCESS_LIMIT='5' (from defaults)
PREFECT_RUNNER_SERVER_ENABLE='False' (from defaults)
PREFECT_RUNNER_SERVER_HOST='localhost' (from defaults)
PREFECT_RUNNER_SERVER_LOG_LEVEL='ERROR' (from defaults)
PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE='2' (from defaults)
PREFECT_RUNNER_SERVER_PORT='8080' (from defaults)
PREFECT_SERVER_ALLOW_EPHEMERAL_MODE='true' (from profile)
PREFECT_SERVER_ANALYTICS_ENABLED='True' (from defaults)
PREFECT_SERVER_API_AUTH_STRING='****' (from defaults)
PREFECT_SERVER_API_BASE_PATH='None' (from defaults)
PREFECT_SERVER_API_HOST='127
thought for 230.4 seconds
Thanks for the extra detail — that helps a lot. Given:
- You're running Prefect 3.4.20
- On Rocky you're invoking the flow directly with `python test_flow.py`
- You lowered the batch interval and even added a 15s sleep, but you never see a "send logs" call

Two likely causes jump out:
1) The flow is not actually attached to an API-tracked run on Rocky (no flow_run_id ⇒ APILogHandler won't send anything)
2) A logging configuration override on Rocky has removed/disabled the APILogHandler

Let's quickly confirm which it is.

Quick checks

1) Does a flow run show up in the UI when you run `python test_flow.py` on Rocky?
- If no, your script is running "local-only" and there's no flow_run_id to attach logs to. Make sure `PREFECT_API_URL` is set in that shell and points to your server, e.g.
```bash
export PREFECT_API_URL=http://<your-host>:<port>/api
```
Note: the trailing `/api` path is required when pointing at a self-hosted server.

2) Verify your active settings in the same shell you use to run the script:
```bash
prefect config view --show-defaults --show-sources
```
Confirm:
- `PREFECT_API_URL` is set (source=env or profile) and correct
- `PREFECT_LOGGING_TO_API_ENABLED` is true
- Note the values of `PREFECT_LOGGING_SETTINGS_PATH` and `PREFECT_HOME`

3) Check if a custom logging config is overriding handlers on Rocky
- Prefect loads a custom logging config from `PREFECT_LOGGING_SETTINGS_PATH` if one is present, and that config can remove the API handler.
- Look for a file at `$PREFECT_HOME/logging.yml` (by default `~/.prefect/logging.yml`) and see if it defines handlers without `prefect.logging.handlers.APILogHandler`.
- If you find one, either remove/rename it to let Prefect's default config load, or ensure it includes an API handler like the one below (a quick way to check which path Prefect will read is sketched after this item):
```yaml
handlers:
  api:
    class: prefect.logging.handlers.APILogHandler
    level: INFO
```
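To quickly confirm which logging config path Prefect will actually read on the Rocky box, here's a small sketch (it assumes the module-level setting objects are importable from `prefect.settings`, in the same style as the snippet in step 4 below):

```python
from pathlib import Path

from prefect.settings import PREFECT_HOME, PREFECT_LOGGING_SETTINGS_PATH

# path Prefect will load a custom logging config from, if the file exists
custom_config = Path(PREFECT_LOGGING_SETTINGS_PATH.value())
print("PREFECT_HOME:", PREFECT_HOME.value())
print("Logging settings path:", custom_config)
print("Custom logging config present:", custom_config.exists())
```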
4) Confirm the API log handler is actually attached at runtime
- Add this to the top of `test_flow.py`:
```python
import logging

import prefect
from prefect.settings import PREFECT_API_URL, PREFECT_LOGGING_TO_API_ENABLED

print("Prefect:", prefect.__version__)
print("API URL:", PREFECT_API_URL.value())
print("Logging to API:", PREFECT_LOGGING_TO_API_ENABLED.value())
print("Prefect logger handlers:", logging.getLogger("prefect").handlers)
```
You should see an APILogHandler instance in that handlers list when API logging is enabled and configured.

Minimal test flow (ensures the run-context logger is used):
```python
from prefect import flow, get_run_logger

@flow
def test_flow():
    logger = get_run_logger()
    logger.info("hello from run logger")

if __name__ == "__main__":
    test_flow()
```
- After running this on Rocky, you should see a flow run in the UI and logs for "hello from run logger".
- If you don't see a flow run, your `PREFECT_API_URL` isn't being picked up in that shell/session.
- If you see a flow run but no logs, it's almost certainly a logging config override removing the APILogHandler.

Why the sleep and batch interval didn't help
- If the APILogHandler isn't attached (because of a custom logging config or no run context), there will never be a "send logs" call to flush — so lowering the interval or sleeping won't create one.

If you can share:
- Output of `prefect config view --show-defaults --show-sources` from Rocky (redact secrets)
- Whether a flow run appears in the UI when running the test flow above
- Whether a `logging.yml` exists on Rocky (and its handler section)

I'll help you pinpoint the exact line item.

References:
- Settings reference: docs-3.prefect.io/v3/api-ref/settings-ref
- Logging handlers overview: docs-3.prefect.io/v3/api-ref/python/prefect-logging-handlers
l
@Marvin I think I solved the problem. My suspicion is that the prefect logger didn't get instantiated at all; after adding `print("Prefect logger handlers:", logging.getLogger("prefect").__dict__)` the logger got created and logs started to appear. Could this be a bug?
m
thought for 53.2 seconds
Nice find! Yes—what you're seeing points to a lazy logging initialization edge case.

What likely happened
- Prefect's API log handler is attached when the "prefect" logger tree is initialized.
- On your Rocky environment, that base logger didn't get realized early enough, so the APILogHandler wasn't installed and no "send logs" calls occurred—even though you saw console output like "Finished in state Completed()".
- When you accessed `logging.getLogger("prefect")`, it forced the logger to be created and Prefect's logging setup kicked in, after which logs started going to the API.

This is consistent with a lazy-init difference between environments and could be a bug. We'd love a minimal repro so we can fix it.

Workarounds you can use now
- Force logging setup early in your script (a fuller example is sketched after this list):
```python
from prefect.logging.configuration import setup_logging

setup_logging()  # ensures handlers (including API) are attached
```
- Or force-create the Prefect logger early:
```python
from prefect.logging import get_logger

get_logger().debug("init prefect logging")
```
- Make sure logs that should reach the API use the run logger inside the flow:
```python
from prefect import flow, get_run_logger

@flow
def test_flow():
    logger = get_run_logger()
    logger.info("hello from run logger")
```
Helpful diagnostics to include if you file an issue
- Prefect version, Python version, OS details
- Output of:
```bash
prefect config view --show-defaults --show-sources
```
- Minimal script showing "no API logs until `getLogger('prefect')` is accessed" (see the sketch below for a starting point)
- Whether a custom `logging.yml` exists in `$PREFECT_HOME` or `~/.prefect`
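A sketch of what that minimal repro script could look like (the `TOUCH_PREFECT_LOGGER` switch is just an illustrative way to toggle the `getLogger("prefect")` call you described, not anything Prefect-specific):

```python
import logging
import os

from prefect import flow, get_run_logger

# with TOUCH_PREFECT_LOGGER unset, this mirrors the original script;
# with it set, the "prefect" logger is touched first (the change that made logs appear)
if os.environ.get("TOUCH_PREFECT_LOGGER"):
    print("prefect logger handlers:", logging.getLogger("prefect").handlers)

@flow
def repro_flow():
    get_run_logger().info("hello from run logger")

if __name__ == "__main__":
    repro_flow()
```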
References
- Settings: docs-3.prefect.io/v3/api-ref/settings-ref
- Logging handlers: docs-3.prefect.io/v3/api-ref/python/prefect-logging-handlers

If you're up for it, please open a GitHub issue with the repro so we can track and patch it: Create a Prefect issue. In the meantime, calling `setup_logging()` at the top of your script is the most reliable fix.