# marvin-ai
@Marvin Marvin makes sense for building workflow-based AI applications. Can you give me an example of building an AI-based application using something like FastAPI?
thought for 174.2 seconds
Absolutely! Here's a minimal, production-friendly example of a FastAPI service that uses Marvin to power AI endpoints like chat, summarization, and structured extraction.

Setup
- Python 3.10+
- Install deps:
```bash
pip install "marvin>=3" fastapi uvicorn pydantic
```
- Set your LLM API key (for example):
```bash
export OPENAI_API_KEY=sk-...
```
FastAPI app (Marvin + endpoints)
```python
# app.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from marvin import Agent, run_async, extract_async, summarize_async

app = FastAPI(title="Marvin + FastAPI")

# A reusable assistant; conversation context is persisted per thread (see /chat)
agent = Agent(
    name="Support Bot",
    instructions="You are a concise, helpful assistant. Use bullet points when appropriate."
)

# ----- Models -----

class ChatRequest(BaseModel):
    message: str
    session_id: str | None = None  # use to persist conversation state

class ChatResponse(BaseModel):
    reply: str
    session_id: str

class ExtractRequest(BaseModel):
    text: str

class Product(BaseModel):
    name: str
    price: float | None = None
    sku: str | None = None

class SummarizeRequest(BaseModel):
    text: str
    style: str | None = None

class SummarizeResponse(BaseModel):
    summary: str

# ----- Endpoints -----

@app.post("/chat", response_model=ChatResponse)
async def chat(req: ChatRequest):
    # Persist agent memory by reusing a consistent thread ID per user/session
    thread_id = req.session_id or "default-session"
    reply = await run_async(req.message, agent=agent, thread=thread_id)
    return ChatResponse(reply=reply, session_id=thread_id)

@app.post("/extract_product", response_model=Product)
async def extract_product(req: ExtractRequest):
    # Structured extraction: turn messy text into a typed object
    product = await extract_async(req.text, target=Product)
    return product

@app.post("/summarize", response_model=SummarizeResponse)
async def summarize(req: SummarizeRequest):
    instructions = f"Summarize in a {req.style} style." if req.style else None
    summary = await summarize_async(req.text, instructions=instructions)
    return SummarizeResponse(summary=summary)
```
Run it
```bash
uvicorn app:app --reload
```
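If you'd rather launch from Python (for example under a debugger or an IDE run config), this is equivalent; it's plain uvicorn, nothing Marvin-specific:
```python
# run.py - programmatic launcher, equivalent to `uvicorn app:app --reload`
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", reload=True)  # serves on 127.0.0.1:8000 by default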
Try it
FastAPI also serves interactive docs for all three endpoints at http://127.0.0.1:8000/docs, or use curl:
- Chat with memory (reusing `session_id`):
```bash
curl -X POST http://127.0.0.1:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hi, who are you?", "session_id": "user-123"}'
```
- Structured extraction:
```bash
curl -X POST http://127.0.0.1:8000/extract_product \
  -H "Content-Type: application/json" \
  -d '{"text": "We sell the Acme Rocket for $19.99, SKU AR-42."}'
```
- Summarization:
```bash
curl -X POST http://127.0.0.1:8000/summarize \
  -H "Content-Type: application/json" \
  -d '{"text": "Long content here...", "style": "bullet point"}'
```
Notes and next steps
- Persist conversation: in production, pass a stable `session_id` from your app (user ID, JWT sub, etc.) so Marvin can retain context for that user via the `thread` parameter.
- Validation: Pydantic models (like `Product`) give you strong typing and automatic JSON validation for AI outputs.
- Streaming: for typing/streaming UIs, you can upgrade the chat endpoint to use `run_stream` with FastAPI's StreamingResponse; see the sketch after this list.
- Observability: if you're already using Prefect 3 for orchestration, you can call flows/tasks inside handlers or after extractions to trigger downstream work.
- Source: Marvin on GitHub
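Here is roughly what that streaming upgrade could look like. The FastAPI side is standard; `stream_chunks` is a hypothetical placeholder for Marvin's streaming interface (check the current docs for the exact call and event shape), so treat this as a pattern sketch rather than working Marvin code:
```python
# Hypothetical streaming variant of /chat. StreamingResponse sends chunks to
# the client as they are produced; swap stream_chunks for Marvin's real
# streaming API (e.g. run_stream) and yield each text delta it emits.
from fastapi.responses import StreamingResponse

async def stream_chunks(message: str, thread_id: str):
    # Placeholder generator: replace with Marvin's streaming call,
    # yielding each text delta as it arrives.
    yield "streamed reply would go here"

@app.post("/chat/stream")
async def chat_stream(req: ChatRequest):
    thread_id = req.session_id or "default-session"
    return StreamingResponse(
        stream_chunks(req.message, thread_id),
        media_type="text/plain",
    )
```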
If you have a specific task in mind (RAG, tools, function calling, multi-agent routing), I can tailor this example to your use case.