# ask-marvin
j
@Marvin I am running a worker in ECS. Both the worker and the container executing the flow use the prefect-3:latest image with Prefect 3.4.22. I am running a previously working flow that reports the contents of an S3 bucket through log statements using Prefect's `get_run_logger()`. The log statements from the flow are not appearing in the Prefect Cloud UI. Events like "Flow run running", "Block secret loaded", and "Flow run completed" appear in the Prefect UI, but the contents of the logging statements do not, even when they are printed and `log_prints` is set to `True`. The CloudWatch logs for the flow run show the content that should be logged to the Prefect Cloud UI. Are there known issues that might cause this, and what settings should I check on the worker to diagnose the issue?
m
thought for 393.2 seconds
Thanks for the detailed context; that helps narrow this down. Given:

- You're on Prefect 3.4.22 in ECS using the `prefect-3:latest` image
- Flow events appear in Cloud but flow logs do not
- CloudWatch shows the expected log lines (both `print` and `get_run_logger()`)

This almost always points to the Prefect "logs to API" handler being disabled or overridden in the flow container, or to logs being dropped before they flush.

Things to check first (quick wins)

- Make sure these env vars are set in the JOB CONTAINER (not just the worker):
  - `PREFECT_LOGGING_TO_API_ENABLED=true`
  - `PREFECT_LOGGING_LOG_PRINTS=true` (since you want `print` lines too)
  - `PREFECT_LOGGING_LEVEL=INFO` (or `DEBUG`)
- Verify the Cloud UI log filters (level, time range, "Show logs" toggles) aren't hiding messages.
- If your flow completes very quickly, logs may not flush before the process exits. For a test run, add a short sleep at the end or reduce the batch interval (see below).

Where to set these for ECS runs

- Preferred: in your ECS work pool's base job template (so they apply to every task the worker launches)
  - Cloud UI: Work Pools -> your ECS pool -> Edit base job template -> Environment variables
- CLI (to inspect current values):
```bash
prefect work-pool inspect "<your-ecs-pool>"
```
Look for `job_variables` and the `env` section.
- Alternatively: set env vars in your deployment's job variables/infrastructure overrides so they propagate to the task; a minimal sketch of that route follows below.
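For the deployment-level route, here is a minimal sketch, assuming the flow code is already baked into your image (the pool and deployment names are placeholders):

```python
from prefect import flow

@flow(log_prints=True)
def logging_check():
    print("hello from the ECS run container")

if __name__ == "__main__":
    # build=False / push=False assumes the flow code is already baked into
    # the image, matching a setup where worker and runs share one image.
    # job_variables["env"] is merged into the run container's environment,
    # so these settings reach the flow container, not just the worker.
    logging_check.deploy(
        name="ecs-logging-check",
        work_pool_name="<your-ecs-pool>",
        image="prefect-3:latest",
        build=False,
        push=False,
        job_variables={
            "env": {
                "PREFECT_LOGGING_TO_API_ENABLED": "true",
                "PREFECT_LOGGING_LOG_PRINTS": "true",
                "PREFECT_LOGGING_LEVEL": "INFO",
            }
        },
    )
```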
Common causes we see

- `PREFECT_LOGGING_TO_API_ENABLED` accidentally set to `false` in the work pool's base job template or deployment overrides. This will give you events (they use the events service) but no logs in Cloud (they use the logs endpoint).
- A custom logging configuration overriding the default Prefect handlers:
  - If you set `PREFECT_LOGGING_CONFIG_PATH` (or configured a custom logging dict), it can remove the Prefect API log handler. Try unsetting it, or ensure the Prefect API log handler is included (a quick handler-listing probe follows after this list).
- Logs dropped before flush:
  - By default, logs are batched and flushed on a short interval. If the container exits right away, some logs may not make it.
  - For testing, set `PREFECT_LOGGING_TO_API_BATCH_INTERVAL=0.5` (flush more frequently) and/or add a `time.sleep(2)` at the end of the flow to confirm the flush occurs.
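To check the handler point concretely, here is a small probe (a sketch; exactly where Prefect's default config attaches its API handler can vary by version, so look for a handler whose class name mentions the API):

```python
import logging

from prefect import flow, get_run_logger

@flow
def handler_probe():
    logger = get_run_logger()
    # Walk up the hierarchy from the flow-run logger and report every
    # attached handler. If a custom logging config dropped Prefect's API
    # handler, no API-flavored handler class will appear in this output.
    current = logging.getLogger("prefect.flow_runs")
    while current is not None:
        for handler in current.handlers:
            logger.info(f"{current.name}: {type(handler).__name__}")
        current = current.parent if current.propagate else None

if __name__ == "__main__":
    handler_probe()
```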
Useful diagnostics

- Inspect the effective Prefect settings inside the flow container (you'll see this in CloudWatch even if logs don't reach Cloud):

```python
import time

from prefect import flow, get_run_logger
from prefect.settings import get_current_settings

@flow
def log_probe():
    s = get_current_settings()
    logger = get_run_logger()
    logger.info(f"Log level: {s.logging.level}")
    logger.info(f"Log prints: {s.logging.log_prints}")
    logger.info(f"To API enabled: {s.logging.to_api.enabled}")
    logger.info(f"Batch interval: {s.logging.to_api.batch_interval}")
    logger.info(f"Batch size: {s.logging.to_api.batch_size}")
    logger.info(f"Max log size: {s.logging.to_api.max_log_size}")
    print("Print that should appear in Cloud logs if log_prints=True")
    time.sleep(2)  # give the API handler time to flush

if __name__ == "__main__":
    log_probe()
```
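If you'd rather not lean on the trailing sleep, a best-effort explicit flush through the stdlib handler API is a cheap alternative (a sketch; whether Prefect's API handler drains its batch synchronously on `flush()` may vary by version):

```python
import logging

def flush_log_handlers() -> None:
    # logging.Handler.flush() is part of the stdlib Handler contract, so it
    # is safe to call on whatever handlers Prefect attached; treat it as a
    # best-effort drain, not a guarantee that batched logs reached the API.
    for name in ("prefect.flow_runs", "prefect.task_runs", "prefect"):
        for handler in logging.getLogger(name).handlers:
            handler.flush()
```

Call it as the last statement of the flow, in place of (or in addition to) the sleep.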
- Turn on verbose internal logging to see if logs are being sent/dropped:
  - `PREFECT_LOGGING_LEVEL=DEBUG`
  - `PREFECT_LOGGING_EXTRA_LOGGERS=prefect.client,prefect.logging`
- Look in CloudWatch for warnings like "dropping log" or errors sending logs to the API.

Known issues

- We're not tracking a 3.4.22-specific bug that causes flow logs to disappear from Cloud while events succeed. When this happens in ECS, it's nearly always configuration (API logging disabled or overridden) or early container exit before the batch flush.

If you want, share:

- The output of `prefect work-pool inspect "<your-ecs-pool>"` (specifically the `env` in `job_variables`)
- Whether you set a custom `PREFECT_LOGGING_CONFIG_PATH`
- A sample flow run ID where this occurred

References

- Logging customization (Prefect 3)
- Settings reference (see `PREFECT_LOGGING_*` and `PREFECT_LOGGING_TO_API_*`)
- ECS worker guide
j
I located 422 errors in CloudWatch for log strings too long for the API.
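That lines up with the `max_log_size` cap the probe above reports: a single log message over the limit is rejected with a 422 and never reaches Cloud. A minimal workaround sketch, assuming mostly single-byte characters and a placeholder bucket name:

```python
from prefect import flow, get_run_logger
from prefect.settings import get_current_settings

def log_in_chunks(logger, text: str, margin: int = 1024) -> None:
    # Keep each message comfortably under the API's max log size; a single
    # oversized message is rejected (422) and never reaches Cloud. The
    # length math assumes mostly single-byte characters.
    limit = max(1, get_current_settings().logging.to_api.max_log_size - margin)
    for start in range(0, len(text), limit):
        logger.info(text[start:start + limit])

@flow
def report_bucket_contents():
    logger = get_run_logger()
    # Placeholder listing standing in for the real S3 bucket contents.
    listing = "\n".join(f"s3://my-bucket/key-{n}" for n in range(100_000))
    log_in_chunks(logger, listing)
```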