# ask-marvin
@Marvin I updated to the latest version of Prefect, but when I call a subflow in a flow on my prod EC2 instance I get:

```
Encountered exception during execution:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/prefect/engine.py", line 861, in orchestrate_flow_run
  File "/usr/local/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 293, in aresult
    Set the timeout for the call.
  File "/usr/local/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 318, in _run_sync
    if not self.future.set_running_or_notify_cancel(self.timeout):
  File "/data-stack-main/prefect/lendfusion_to_s3.py", line 423, in lendfusion_to_s3
    read_and_upload_csv_files_as_json(
  File "/usr/local/lib/python3.10/site-packages/prefect/flows.py", line 1129, in __call__
  File "/usr/local/lib/python3.10/site-packages/prefect/engine.py", line 234, in enter_flow_run_engine_from_flow_call
  File "/usr/local/lib/python3.10/site-packages/prefect/logging/configuration.py", line 75, in setup_logging
  File "/usr/local/lib/python3.10/site-packages/prefect/logging/configuration.py", line 39, in load_logging_config
    with warnings.catch_warnings():
  File "/usr/local/lib/python3.10/string.py", line 121, in substitute
    return self.pattern.sub(convert, self.template)
  File "/usr/local/lib/python3.10/string.py", line 114, in convert
    return str(mapping[named])
KeyError: 'PREFECT_SERVER_LOGGING_LEVEL'
```
thought for 35.6 seconds
Based on the research and the GitHub issue, I can help you resolve this error. The issue is related to a logging configuration setting, and there are two ways to fix it:

1. Set the environment variable:
```bash
export PREFECT_SERVER_LOGGING_LEVEL=WARNING  # or INFO, ERROR, etc.
```
2. Update your logging configuration: if you have a custom `logging.yml` file (typically at `~/.prefect/logging.yml`), update it to use the correct variable name. Change:
```yaml
loggers:
  prefect:
    level: "${PREFECT_LOGGING_SERVER_LEVEL}"
```
to:
```yaml
loggers:
  prefect:
    level: "${PREFECT_SERVER_LOGGING_LEVEL}"
```
The error occurs because the environment variable naming convention changed: the correct name is `PREFECT_SERVER_LOGGING_LEVEL`, not `PREFECT_LOGGING_SERVER_LEVEL`. Common logging levels are DEBUG, INFO, WARNING, and ERROR; for production environments, WARNING or ERROR is typically recommended.

Would you like me to help you with a specific logging level configuration, or do you need clarification on any part of the solution?
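The traceback bottoms out in the stdlib's `string.Template.substitute`, which raises `KeyError` for any `${...}` placeholder missing from its mapping. A minimal stdlib-only reproduction of that failure mode (the template line here is illustrative, mimicking a `logging.yml` entry):

```python
from string import Template

# Prefect expands ${...} placeholders in logging.yml via string.Template;
# substitute() raises KeyError when a placeholder has no value in the mapping.
tmpl = Template('level: "${PREFECT_SERVER_LOGGING_LEVEL}"')

try:
    tmpl.substitute({})
except KeyError as exc:
    print("missing variable:", exc.args[0])  # missing variable: PREFECT_SERVER_LOGGING_LEVEL

# With the variable provided, expansion succeeds:
print(tmpl.substitute({"PREFECT_SERVER_LOGGING_LEVEL": "WARNING"}))  # level: "WARNING"
```

This is why the error only appears when the variable is absent from the environment of the process that loads the logging config.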
@Marvin The issue is that it's set in the Dockerfile:

```dockerfile
FROM --platform=linux/amd64 prefecthq/prefect:3.4.11-python3.10

# Set Prefect logging levels globally
ENV PREFECT_LOGGING_LEVEL=INFO \
    PREFECT_CLIENT_LOGGING_LEVEL=INFO \
    PREFECT_SERVER_LOGGING_LEVEL=INFO \
    PREFECT_LOGGING_SETTINGS_PATH=/usr/flows/logging.yml

# Copy Python dependencies
COPY requirements.txt /tmp/requirements.txt

# Copy logging configuration
COPY logging.yml /usr/flows/logging.yml

# Install them
RUN python -m pip install --no-cache-dir --upgrade pip \
    && python -m pip install --no-cache-dir -r /tmp/requirements.txt \
    && rm /tmp/requirements.txt

# Default workdir for exec / VS Code
WORKDIR /usr/flows
```

It's also set in the env, and it still does not work. I think it's an issue with the Docker containers for subflows not having the env set.
thought for 49.7 seconds
Based on your Dockerfile and the research, I can see a couple of potential solutions to this issue:

1. Pass environment variables to the subflow container: if your subflows run in separate containers, you need to ensure the environment variables are passed to those containers as well. You can modify your flow deployment to include these variables:
```python
flow.deploy(
    name="your-deployment",
    job_variables={
        "env": {
            "PREFECT_LOGGING_LEVEL": "INFO",
            "PREFECT_CLIENT_LOGGING_LEVEL": "INFO",
            "PREFECT_SERVER_LOGGING_LEVEL": "INFO",
            "PREFECT_LOGGING_SETTINGS_PATH": "/usr/flows/logging.yml"
        }
    }
)
```
2. Use a custom logger in your flows: a more robust solution is to use Prefect's built-in logging utilities in your flows:
```python
from prefect import flow, get_run_logger

@flow
def your_flow():
    logger = get_run_logger()
    logger.info("This will be properly logged")
    # Your flow logic here

@flow
def your_subflow():
    logger = get_run_logger()
    logger.info("This will be properly logged in the subflow")
    # Your subflow logic here
```
3. Modify your `logging.yml`: make sure the file has a default fallback if the environment variable isn't set:
```yaml
loggers:
  prefect:
    level: ${PREFECT_SERVER_LOGGING_LEVEL:-INFO}  # This provides a default of INFO if the var isn't set
    handlers: [console]
    propagate: false
```
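One caveat worth verifying before relying on the `${VAR:-default}` form: the stdlib `string.Template` that appears in the traceback above only accepts a plain identifier inside `${...}` and treats shell-style fallback syntax as an invalid placeholder. A quick stdlib check (the template string is illustrative):

```python
from string import Template

# ${NAME} in string.Template must be a bare identifier; the shell-style
# ':-INFO' suffix makes the placeholder invalid, so substitute() raises
# ValueError instead of applying a default.
t = Template("level: ${PREFECT_SERVER_LOGGING_LEVEL:-INFO}")
try:
    t.substitute({"PREFECT_SERVER_LOGGING_LEVEL": "WARNING"})
except ValueError as exc:
    print("rejected:", exc)
```

So if Prefect's config loader expands the file with `string.Template`, as the traceback suggests, exporting the variable (or defaulting it in code before logging is configured) is the safer route.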
4. Ensure volume mounting: make sure your `logging.yml` file is properly mounted in the subflow containers:
```dockerfile
VOLUME /usr/flows
```
Could you share:

1. How are you calling the subflow? (Is it in the same flow file or imported?)
2. Are you using any specific deployment infrastructure for the subflows?

This would help me provide a more targeted solution for your specific setup.
@Marvin 1. Same file, but the task is imported. 2. No, it defaults, I think.
thought for 57.4 seconds
Based on the research and your setup, I have a few suggestions to resolve this issue:

1. First, verify the environment variables are actually set in your container. Add this debugging code at the start of your flow:
```python
import os
from prefect import flow, task, get_run_logger

@flow
def debug_flow():
    logger = get_run_logger()
    logger.info("Checking environment variables:")
    for key in os.environ:
        if key.startswith('PREFECT_'):
            logger.info(f"{key}: {os.environ[key]}")

@flow
def main_flow():
    debug_flow()
    # rest of your flow
```
2. Modify your logging.yml to use fallbacks:
```yaml
version: 1
disable_existing_loggers: False

formatters:
  standard:
    (): prefect.logging.formatters.PrefectFormatter
    format: "%(asctime)s.%(msecs)03d | %(levelname)-7s | %(name)s - %(message)s"

handlers:
  console:
    class: prefect.logging.handlers.PrefectConsoleHandler
    formatter: standard
    level: ${PREFECT_LOGGING_LEVEL:-INFO}

loggers:
  prefect:
    level: ${PREFECT_SERVER_LOGGING_LEVEL:-INFO}
    handlers: [console]
    propagate: false

root:
  level: ${PREFECT_LOGGING_LEVEL:-INFO}
  handlers: [console]
```
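Since the `${...}` placeholders are expanded at the moment the config file is loaded, a belt-and-braces alternative (a sketch, assuming you control the flow entrypoint; the variable names mirror the Dockerfile above) is to default the variables in Python before any Prefect logging setup runs:

```python
import os

# Give every variable referenced by logging.yml a value, even if the
# container the subflow runs in did not inherit the Dockerfile ENV lines.
# setdefault() leaves already-set values (e.g. from `docker run -e`) intact.
LOGGING_DEFAULTS = {
    "PREFECT_LOGGING_LEVEL": "INFO",
    "PREFECT_CLIENT_LOGGING_LEVEL": "INFO",
    "PREFECT_SERVER_LOGGING_LEVEL": "INFO",
}

for name, value in LOGGING_DEFAULTS.items():
    os.environ.setdefault(name, value)
```

Placed at the very top of the flow entrypoint module, this runs before the logging config is read, so the `KeyError` cannot occur regardless of how the container was started.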
3. Ensure environment variables are set in your Docker run command:
```bash
docker run \
  -e PREFECT_LOGGING_LEVEL=INFO \
  -e PREFECT_CLIENT_LOGGING_LEVEL=INFO \
  -e PREFECT_SERVER_LOGGING_LEVEL=INFO \
  -e PREFECT_LOGGING_SETTINGS_PATH=/usr/flows/logging.yml \
  your-image-name
```
4. Use explicit logging in your tasks:
```python
from prefect import task, get_run_logger

@task
def your_task():
    logger = get_run_logger()
    logger.info("Starting task execution")
    # your task logic here
```
Could you try these steps and let me know:

1. What does the debug_flow output show?
2. Are you running this in a local Docker container or in a cloud environment?
3. Could you share a bit more about how the task is imported? (Is it from another file in the same directory, or from a package?)

This information would help me provide a more targeted solution for your specific setup.