2) Install packages
pip install marvin langfuse openai
3) Configure environment variables
- For Langfuse Cloud:
export LANGFUSE_PUBLIC_KEY=your_public_key
export LANGFUSE_SECRET_KEY=your_secret_key
export LANGFUSE_HOST=https://cloud.langfuse.com
- For self-hosted, set LANGFUSE_HOST to your instance URL. A quick way to confirm the credentials resolve is sketched below.
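To verify the keys before wiring up Marvin, you can run a one-off auth check (a minimal sketch using the Langfuse Python SDK v3; on SDK v2, call Langfuse().auth_check() instead):

from langfuse import get_client

# get_client() reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment
langfuse = get_client()

# auth_check() returns True when the keys authenticate against the configured host
assert langfuse.auth_check(), "Langfuse credentials are invalid or the host is unreachable"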
4) Auto-instrument OpenAI (captures Marvin’s model calls)
Add this as early as possible in your app startup, before any Marvin calls:

from langfuse.openai import openai  # noqa: F401

# Importing langfuse.openai patches the OpenAI SDK in place, so OpenAI calls
# made anywhere in the process (including inside Marvin) are traced to Langfuse.
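As a sanity check (the model name here is just an example), a direct call through the patched SDK should show up as a generation in Langfuse:

from langfuse.openai import openai

client = openai.OpenAI()  # picks up OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model you have access to
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)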
5) Optional: wrap your Marvin entrypoint with a Langfuse trace
This creates a clean top-level trace that contains the nested spans from the OpenAI auto-instrumentation.

import marvin
from langfuse import observe  # SDK v3; on v2: from langfuse.decorators import observe

@observe(name="marvin.generate-demo")
def run_marvin():
    # marvin.run executes a one-shot task; Agent, say, generate, etc. work the same way
    return marvin.run("Write a short haiku about Langfuse + Marvin.")

if __name__ == "__main__":
    print(run_marvin())
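In short-lived scripts, it’s worth flushing the client before the process exits so buffered events aren’t dropped (a minimal sketch with the v3 client):

from langfuse import get_client

if __name__ == "__main__":
    print(run_marvin())
    get_client().flush()  # send any buffered traces/generations before exit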
That’s it. When Marvin uses OpenAI under the hood, those requests/responses, latencies, and costs will appear in Langfuse. The decorator gives you a top-level trace around the Marvin call; add @observe(name="...") to other functions for additional nested spans, as in the sketch below.
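For example (the helper names here are illustrative), decorated helpers show up as child spans of the enclosing trace:

import marvin
from langfuse import observe

@observe(name="prepare-prompt")
def prepare_prompt(topic: str) -> str:
    return f"Write a short haiku about {topic}."

@observe(name="haiku-pipeline")
def pipeline(topic: str) -> str:
    # prepare_prompt appears as a nested span inside haiku-pipeline
    return marvin.run(prepare_prompt(topic))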
Optional: correlate with Prefect runs (if you’re orchestrating with Prefect 3.x)
- Pull the Prefect run ID and attach it to the trace so you can search/link in Langfuse. Note that prefect.runtime.flow_run.id is an attribute (not a call) and is only populated while a flow is running, so read it inside the function rather than in the decorator:

from prefect.runtime import flow_run
from langfuse import observe, get_client

@observe(name="marvin.flow-run-job")
def run_marvin_job():
    # Attach the Prefect flow run ID to the current Langfuse trace
    get_client().update_current_trace(metadata={"prefect_flow_run_id": flow_run.id})
    ...
- For structured correlation (e.g., metadata or tags on the trace), see the Langfuse docs on adding metadata to observations; a sketch follows below.
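A minimal sketch (v3 client; the tag and metadata values are illustrative):

from langfuse import observe, get_client

@observe(name="marvin.tagged-job")
def tagged_job():
    get_client().update_current_trace(
        tags=["marvin", "prefect"],
        metadata={"deployment": "staging"},
    )
    ...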
Notes and tips
- The langfuse.openai import instruments the official OpenAI SDK only. If you’re using non-OpenAI providers in Marvin, you’ll still get the decorator-level trace, but not auto-captured LLM details; add more @observe decorators or create manual spans/generations (a sketch follows after these notes; see the Langfuse docs for details).
- Do the langfuse.openai import as early as possible, before any OpenAI client is instantiated.
- If you run in containers/workers, ensure the LANGFUSE_* env vars are set in that environment too.
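For manual instrumentation of non-OpenAI calls, the v3 SDK exposes context managers; a minimal sketch (the model identifier and call_your_provider helper are hypothetical):

from langfuse import get_client

langfuse = get_client()

def call_your_provider() -> str:
    ...  # hypothetical: invoke your non-OpenAI model here

with langfuse.start_as_current_generation(
    name="non-openai-call",
    model="some-provider/some-model",  # hypothetical model identifier
) as gen:
    output = call_your_provider()
    gen.update(output=output)  # record the model output on the generation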
Want me to tailor the example to your specific Marvin usage (e.g., Agent, tools, streaming), or to your deployment setup (Prefect deployments, Docker, K8s)?