# marvin-in-the-wild
a
@Marvin I'm running Prefect server + agent locally and also in my Dev environment in AWS, and I'm having an issue where I see flow and task run logs locally in the Prefect UI, but I'm not able to see anything in the AWS environment. The flow and its tasks run successfully in Dev, but logs aren't displayed in the UI. I'm using `get_run_logger()` to instantiate the logger at both the flow and task levels and calling `logger.info()`. My minimum log level setting is INFO. I've verified the config and settings in both envs and ensured that the agent container can communicate with the server and the DB where logs are stored. Clicking on a task in Dev simply displays "This run didn't generate Logs". The only log I have in my `public.log` table in my Postgres db is "Downloading flow code from storage at '/app'". The server and agent AWS logs in Dev don't contain any exceptions or warnings. Here's a snapshot of my logging variables:
```
PREFECT_LOGGING_EXTRA_LOGGERS='test-logger' (from profile)
PREFECT_LOGGING_INTERNAL_LEVEL='DEBUG' (from profile)
PREFECT_LOGGING_LEVEL='DEBUG' (from profile)
PREFECT_LOGGING_LOG_PRINTS='True' (from profile)
PREFECT_LOGGING_SERVER_LEVEL='DEBUG' (from profile)
```
m
Here are some steps and suggestions you can try to troubleshoot the logging issue in your AWS environment:

1. Check logger configuration: Review the logging configuration on your Prefect server to ensure it's correctly set up to capture and display logs. Verify the settings in your `logging.yml` file if you're using a custom one. You can find guidance on customizing logging configuration in the Prefect documentation.
2. Environment variables: Confirm that the environment variables are set correctly in your AWS environment. These include `PREFECT_LOGGING_EXTRA_LOGGERS`, `PREFECT_LOGGING_INTERNAL_LEVEL`, `PREFECT_LOGGING_LEVEL`, `PREFECT_LOGGING_LOG_PRINTS`, and `PREFECT_LOGGING_SERVER_LEVEL`.
3. Network and DB connectivity: Ensure that the agent container in AWS has proper network access to communicate with the Prefect server and the PostgreSQL database where logs are stored.
4. Server and agent logs: Inspect the server and agent logs in AWS to ensure there are no errors or warnings that might affect logging.
5. Debug settings: Enable debug logging to capture more detailed logs, which might help in identifying the root cause. You can enable it by setting `PREFECT_LOGGING_LEVEL` to `'DEBUG'`.
6. Verify logger usage: Double-check the use of `get_run_logger()` in your code to ensure it is being called correctly at both the flow and task levels.
7. Database table: Confirm that the `public.log` table in your Postgres database is being updated correctly from the AWS environment.

Here's a link to Prefect's logging guide in the documentation:
* https://docs.prefect.io/latest/guides/logs/

For more detailed troubleshooting steps and community discussions, you can also explore these active GitHub issues:
1. Issue with Prefect UI Logs: Link to Issue
2. Flow Run Stuck in Submitted State: Link to Relevant Discussion

These steps should help you identify the problem. If none of the above solutions work, consider reaching out to the Prefect community or support with detailed information about your setup and the troubleshooting steps you have already taken.
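[Editor's note] The `PREFECT_LOGGING_EXTRA_LOGGERS` mechanism mentioned above can be sketched with the stdlib alone: Prefect attaches its API log handler to each logger named in that setting so their records get shipped to the server. The `FakeAPIHandler` and `shipped` list below are stand-ins for illustration only, not Prefect's actual handler.

```python
import logging

# Stand-in for the Prefect API: collected log messages end up here.
shipped = []


class FakeAPIHandler(logging.Handler):
    """Stand-in for Prefect's API log handler."""

    def emit(self, record):
        shipped.append(record.getMessage())


# Mimic PREFECT_LOGGING_EXTRA_LOGGERS='test-logger': attach the handler
# to each named logger so its records are captured.
extra_loggers = ["test-logger"]
for name in extra_loggers:
    lg = logging.getLogger(name)
    lg.setLevel(logging.INFO)
    lg.addHandler(FakeAPIHandler())

logging.getLogger("test-logger").info("hello")
print(shipped)  # -> ['hello']
```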
a
@Marvin I've tried all of that. I've also tried using a custom logger a la `logging.getLogger()` along with setting `PREFECT_LOGGING_EXTRA_LOGGERS`, and I've tried print statements along with setting `PREFECT_LOGGING_LOG_PRINTS`, but neither causes anything to show up in the logs. Here's an example of a test flow that is running, but is not generating any logs:
```python
import datetime
import logging

import requests
from prefect import flow, task
from prefect.task_runners import SequentialTaskRunner


def get_logger():
    logger = logging.getLogger("test-logger")
    logger.setLevel(logging.INFO)
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    logger.addHandler(console_handler)
    return logger


@task
def extract_data():
    logger = get_logger()
    # Extract data from a source
    response = requests.get("https://news.ycombinator.com")
    if response.status_code == 200:
        logger.info(response.text)
        print(response.text)
        return response.text
    else:
        logger.info(f"Failed to retrieve data, status code: {response.status_code}")
        print(f"Failed to retrieve data, status code: {response.status_code}")

    return ""


@flow(
    flow_run_name="{schema_name}-on-{date:%A}",
    description="Extracts data from HN",
    task_runner=SequentialTaskRunner(),
)
def test_flow(schema_name: str, date: datetime.datetime):
    logger = get_logger()

    logger.info(f"logger: Running flow for schema: {schema_name}")
    print(f"print: Running flow for schema: {schema_name}")
    data = extract_data()
    logger.info("logger: Flow completed")
    print("print: Flow completed")
```
m
Your message was too long by 47 tokens - please shorten it and try again. For reference, here's your message at the allowed limit:
I've tried all of that. I've also tried using a custom logger a la `logging.getLogger()` along with setting `PREFECT_LOGGING_EXTRA_LOGGERS`, and I've tried print statements along with setting `PREFECT_LOGGING_LOG_PRINTS`, but neither causes anything to show up in the logs.
Here's an example of my a test flow that is running, but is not generating any logs: ```def get_logger(): logger = logging.getLogger("test-logger") logger.setLevel(logging.INFO) console_handler = logging.StreamHandler() console_handler.setLevel(logging.INFO) logger.addHandler(console_handler) return logger @task def extract_data(): logger = get_logger() # Extract data from a source response = requests.get("https://news.ycombinator.com") if response.status_code == 200: logger.info(response.text) print(response.text) return response.text else: logger.info(f"Failed to retrieve data, status code: {response.status_code}") print(f"Failed to retrieve data, status code: {response.status_code}") return "" @flow( flow_run_name="{schema_name}-on-{date:%A}", description="Extracts data from HN", task_runner=SequentialTaskRunner(), ) def test_flow(schema_name: str, date: datetime.datetime): logger = get_logger() logger.info(f"logger: Running
a
@Marvin I've tried all of that. I've also tried using a custom logger a la `logging.getLogger()` along with setting `PREFECT_LOGGING_EXTRA_LOGGERS`, and I've tried print statements along with setting `PREFECT_LOGGING_LOG_PRINTS`, but neither causes anything to show up in the logs. Here's an example of a test flow that is running, but is not generating any logs:
```python
import logging

import requests
from prefect import flow, task
from prefect.task_runners import SequentialTaskRunner


def get_logger():
    logger = logging.getLogger("test-logger")
    logger.setLevel(logging.INFO)
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    logger.addHandler(console_handler)
    return logger


@task
def extract_data():
    logger = get_logger()
    # Extract data from a source
    response = requests.get("https://news.ycombinator.com")
    if response.status_code == 200:
        logger.info(response.text)
        print(response.text)
        return response.text
    else:
        logger.info(f"Failed, status code: {response.status_code}")
        print(f"Failed, status code: {response.status_code}")

    return ""


@flow(
    task_runner=SequentialTaskRunner(),
)
def test_flow():
    logger = get_logger()

    logger.info("logger: Running flow")
    print("print: Running flow")
    data = extract_data()
    logger.info("logger: Flow completed")
    print("print: Flow completed")
```
m
Your message was too long by 35 tokens - please shorten it and try again. For reference, here's your message at the allowed limit:
I've tried all of that. I've also tried using a custom logger a la `logging.getLogger()` along with setting `PREFECT_LOGGING_EXTRA_LOGGERS`, and I've tried print statements along with setting `PREFECT_LOGGING_LOG_PRINTS`, but neither causes anything to show up in the logs.
Here's an example of my a test flow that is running, but is not generating any logs: ```def get_logger(): logger = logging.getLogger("test-logger") logger.setLevel(logging.INFO) console_handler = logging.StreamHandler() console_handler.setLevel(logging.INFO) logger.addHandler(console_handler) return logger @task def extract_data(): logger = get_logger() # Extract data from a source response = requests.get("https://news.ycombinator.com") if response.status_code == 200: logger.info(response.text) print(response.text) return response.text else: logger.info(f"Failed, status code: {response.status_code}") print(f"Failed, status code: {response.status_code}") return "" @flow( task_runner=SequentialTaskRunner(), ) def test_flow(): logger = get_logger() logger.info(f"logger: Running flow") print(f"
a
@Marvin I've tried all of that. I've also tried using a custom logger a la `logging.getLogger()` along with setting `PREFECT_LOGGING_EXTRA_LOGGERS`, and I've tried print statements along with setting `PREFECT_LOGGING_LOG_PRINTS`, but neither causes anything to show up in the logs.
m
It looks like you've already tried several steps to diagnose your issue. Here are some advanced troubleshooting steps based on additional insights:

1. Ensure log propagation: Verify that your loggers, including custom ones, have propagation enabled so logs are sent to the root logger used by Prefect. You might need to explicitly set `propagate` to `True` for custom loggers.
2. Check flow and task settings: Explicitly set `log_prints=True` at both the flow and task levels:
   ```python
   from prefect import flow, task, get_run_logger

   @task(log_prints=True)
   def my_task():
       print("This is a print statement")
       logger = get_run_logger()
       logger.info("This is an info log")

   @flow(log_prints=True)
   def my_flow():
       my_task()

   my_flow()
   ```
3. Inspect Prefect configuration: Confirm that all relevant Prefect environment variables are set prior to running the agent. Prefect might not pick up changes if they are set after the agent starts.
4. Database inspection: Ensure that your Postgres database table `public.log` is being updated correctly from the AWS environment. Here is a SQL query to check:
   ```sql
   SELECT * FROM public.log ORDER BY timestamp DESC LIMIT 10;
   ```
5. Debug logging: Enable `DEBUG`-level logging for deeper insights:
   ```bash
   export PREFECT_LOGGING_LEVEL="DEBUG"
   ```
6. Alternative storage: Consider using an alternative storage option such as S3 or Azure Blob Storage to persist and inspect logs outside the Prefect UI, to rule out any UI-specific issues.

You can find detailed logging configuration guidance here:
* https://docs.prefect.io/latest/guides/logs/

If logs still do not appear, you might have encountered a bug. You can review similar cases and ongoing discussions here:
1. Issue with Prefect UI Logs: https://github.com/PrefectHQ/prefect/issues/4505
2. Flow Run Stuck in Submitted State: https://prefect-community.slack.com/archives/CL09KU1K7/p1612568555388000?thread_ts=1612568555.388000&cid=CL09KU1K7

If the problem persists, seeking assistance from the Prefect community or official support with your detailed setup and troubleshooting steps might be needed.
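[Editor's note] The propagation point above can be demonstrated with the stdlib alone. The `CaptureHandler` below is only a stand-in for whatever handler sits on the root logger (in Prefect's case, the handler that forwards records to the API): with `propagate = False`, a custom logger's records never reach root-level handlers at all.

```python
import logging

captured = []


class CaptureHandler(logging.Handler):
    """Stand-in for a root-level handler that forwards logs onward."""

    def emit(self, record):
        captured.append(record.getMessage())


root = logging.getLogger()
root.addHandler(CaptureHandler())
root.setLevel(logging.INFO)

custom = logging.getLogger("my-custom-logger")
custom.setLevel(logging.INFO)

custom.propagate = False
custom.info("dropped")    # never reaches the root handler

custom.propagate = True
custom.info("captured")   # propagates up to the root handler

print(captured)  # -> ['captured']
```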
a
Figured out the issue and wanted to follow up here just in case someone else runs into it in the future with a similar configuration. The flow I was testing requires the Django ORM, which I'm instantiating with the following before importing Django models:
```python
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")
django.setup()
```
This works fine locally since my `LOGGING` configuration in base.py has `disable_existing_loggers` set to False. However, my production.py config (which `DJANGO_SETTINGS_MODULE` is set to in my deployed envs) had this set to True, and this was suppressing logs generated within Prefect flows. If anyone needs help with running the Django ORM in Prefect, feel free to DM me on Twitter (@ajaynayak). This isn't super well documented, so I had to develop some of my own patterns.
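[Editor's note] This failure mode can be reproduced with the stdlib alone, no Prefect or Django required. Django's `LOGGING` setting is ultimately fed into `logging.config.dictConfig`, and `disable_existing_loggers: True` disables every already-created logger the config doesn't mention. The logger name below just mirrors Prefect's naming; any logger created before `django.setup()` runs behaves the same way.

```python
import logging
import logging.config

# A logger created *before* the config is applied, as Prefect's run
# loggers are by the time django.setup() runs inside a flow.
early_logger = logging.getLogger("prefect.flow_runs")

logging.config.dictConfig({
    "version": 1,
    # The production.py setting that was hiding the flow logs:
    "disable_existing_loggers": True,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"handlers": ["console"], "level": "INFO"},
})

# The pre-existing logger is now disabled; its records are silently dropped.
print(early_logger.disabled)  # -> True
```

Setting `disable_existing_loggers` to `False` (or listing the Prefect loggers explicitly in the config) leaves pre-existing loggers active, which is why the base.py config worked.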