# marvin-ai
a
@Marvin I've deployed a langgraph app to a managed deployment in Prefect Cloud. Except for the flow entrypoint (in which I've used get_run_logger()), none of my debug messages are showing in the run log. My business logic is in nodes orchestrated by a langgraph graph, so they're not defined as flows or tasks.
m
thought for 299.6 seconds
You're running into Prefect's logging boundaries. In Prefect 3.x, only logs emitted through Prefect's run logger (via `get_run_logger()`) and any explicitly configured stdlib loggers are guaranteed to land in the flow run logs. Arbitrary code (like langgraph nodes that aren't flows/tasks) won't be captured unless you wire it up.

Quick ways to get your node logs into Prefect:

1) Use Prefect's run logger inside your nodes. You can call `prefect.get_run_logger()` from any code that executes during an active flow run; it does not need to be a task.
```python
from prefect import get_run_logger

def my_node(state):  # any node signature works; no @task needed
    logger = get_run_logger()
    logger.debug("debug from langgraph node")
    ...
```
2) Capture `print()` output. EITHER set `log_prints=True` on your flow decorator:

```python
from prefect import flow

@flow(log_prints=True)
def entrypoint():
    ...
```
- OR patch `print()` at the start of your entrypoint:

```python
from prefect import flow
from prefect.logging.loggers import patch_print

@flow
def entrypoint():
    patch_print()
    ...
```
3) Capture stdlib logging from libraries (langgraph, langchain, openai, httpx, etc.). Set these environment variables on your managed deployment (Cloud UI > Deployment > Settings > Environment Variables):

```
PREFECT_LOGGING_ROOT_LEVEL=DEBUG
PREFECT_LOGGING_EXTRA_LOGGERS=langgraph,langchain,openai,httpx
```
- Optionally, ensure those libraries are actually emitting DEBUG by setting their levels in code:
```python
import logging

logging.getLogger("langgraph").setLevel(logging.DEBUG)
logging.getLogger("langchain").setLevel(logging.DEBUG)
```
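Why this works: the extra-loggers setting attaches Prefect's forwarding handler to each named top-level logger, and records from child loggers reach it through ordinary stdlib propagation. A stdlib-only sketch of the mechanics (the `CapturingHandler` here is a stand-in for Prefect's real handler, not its actual class):

```python
import logging

captured = []

class CapturingHandler(logging.Handler):
    # Stand-in for the handler Prefect attaches to each logger named in
    # PREFECT_LOGGING_EXTRA_LOGGERS (this is not Prefect's real class).
    def emit(self, record):
        captured.append(record.getMessage())

# The handler sits on the top-level "langgraph" logger...
lib_logger = logging.getLogger("langgraph")
lib_logger.addHandler(CapturingHandler())
lib_logger.setLevel(logging.DEBUG)

# ...and child loggers like "langgraph.pregel" propagate records up to it,
# as long as nothing sets propagate=False along the way.
logging.getLogger("langgraph.pregel").debug("debug from a child logger")

print(captured)  # -> ['debug from a child logger']
```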
Notes:
- Managed vs. local doesn't change logging semantics; the key is the environment and how logs are emitted. Set the env vars on the deployment and/or update the code as above.
- If you rely heavily on `print()`, prefer `@flow(log_prints=True)` or `patch_print()` to ensure those messages show in the run log.

Docs for deeper detail:
- Add logging to flows
- Advanced logging customization
- Logging API (get_run_logger, patch_print)

If you can share how your nodes are currently logging (e.g., `print`, `logging.getLogger(__name__)`, or something else) and which logger names you need, I can suggest the exact env var values and minimal code changes.
a
@Marvin I'm using `logging.getLogger(__name__)` in the nodes. I did try changing this to `get_run_logger()` as you suggest above, but this resulted in runtime errors complaining that `get_run_logger()` can only be called during a flow or task run, so I assume that something about my graph flow is breaking the context.
m
thought for 70.6 seconds
Thanks; that error tells us your langgraph nodes are executing outside Prefect's run context (e.g., in separate threads, event loops, or subprocesses), so `get_run_logger()` isn't available there. You don't have to switch your node logging to Prefect's logger: the most reliable path is to capture your existing stdlib logging into the Prefect run logs.

Do this:

1) Register your node/library loggers with Prefect. Add these env vars on the managed deployment (Cloud UI > Deployment > Settings > Environment variables):
```
PREFECT_LOGGING_ROOT_LEVEL=DEBUG
PREFECT_LOGGING_EXTRA_LOGGERS=your_top_level_package,langgraph
```
- Replace `your_top_level_package` with the actual top-level name you use (the first segment of `__name__`, e.g., `myapp`).

2) Make sure those loggers emit DEBUG:
```python
import logging

logging.getLogger("your_top_level_package").setLevel(logging.DEBUG)
logging.getLogger("langgraph").setLevel(logging.DEBUG)
```
Tip: if you call `logging.basicConfig(...)` or attach your own handlers elsewhere, ensure your loggers still propagate (`logger.propagate = True`) so Prefect's handler can forward them.

Helpful alternatives if you want more control:

- Dependency-inject the run logger once at the top of your flow and use it in your nodes (avoids context lookups inside threads):
```python
from prefect import flow, get_run_logger

@flow
def entrypoint():
    run_logger = get_run_logger()
    graph.invoke(inputs={"logger": run_logger})  # or however you pass context
```
Then in your nodes:

```python
def my_node(logger, state):  # however the logger reaches the node
    logger.debug("debug from node")
```
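A runnable, stdlib-only sketch of this injection pattern (no langgraph or Prefect imports; `run_graph` is a hypothetical stand-in for `graph.invoke`, and inside a real flow you'd build the state with `get_run_logger()`):

```python
import logging

def my_node(state):
    # The node reads the injected logger from shared state instead of
    # calling get_run_logger() itself, so no Prefect context is needed here.
    state["logger"].debug("debug from node")
    return state

def run_graph(state):
    # Hypothetical stand-in for graph.invoke(...): just runs the one node.
    return my_node(state)

# In the real flow, this logger would come from get_run_logger().
result = run_graph({"logger": logging.getLogger("myapp.nodes")})
```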
- Eavesdrop on specific logger hierarchies from inside the flow. This can forward logs (even from other threads) to the current run logger:

```python
import logging
from prefect import flow
from prefect.logging.loggers import LogEavesdropper

@flow
def entrypoint():
    with LogEavesdropper("your_top_level_package", level=logging.DEBUG), \
         LogEavesdropper("langgraph", level=logging.DEBUG):
        graph.invoke(...)
```
- If you also use `print()`, add one of:

```python
from prefect import flow

@flow(log_prints=True)
def entrypoint():
    graph.invoke(...)
```

or

```python
from prefect import flow
from prefect.logging.loggers import patch_print

@flow
def entrypoint():
    patch_print()
    graph.invoke(...)
```
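One more defensive option: if some nodes sometimes do run on the flow's thread, a small helper can prefer the run logger there and fall back to stdlib logging everywhere else. A sketch, assuming `prefect.exceptions.MissingContextError` is what `get_run_logger()` raises outside a run context (`myapp.nodes` is a placeholder name):

```python
import logging

def get_node_logger(name="myapp.nodes"):
    """Use Prefect's run logger when a flow/task context is active;
    otherwise fall back to a plain stdlib logger of the given name."""
    try:
        from prefect import get_run_logger
        from prefect.exceptions import MissingContextError
    except ImportError:
        # Prefect not importable (e.g., local unit tests): plain logger.
        return logging.getLogger(name)
    try:
        return get_run_logger()
    except MissingContextError:
        # No active run context (e.g., node running in a worker thread):
        # plain logger, which the PREFECT_LOGGING_EXTRA_LOGGERS setup
        # above can still capture.
        return logging.getLogger(name)
```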
Docs:
- Add logging to flows
- Advanced logging customization
- Logging API (get_run_logger, LogEavesdropper, patch_print)

If you can share:
- the exact logger names you're using (e.g., `myapp.nodes`), and
- whether your graph runs nodes in threads or subprocesses

then I can suggest the precise `PREFECT_LOGGING_EXTRA_LOGGERS` value and confirm whether `LogEavesdropper` or dependency injection is the better fit.