Tom Han
08/28/2025, 4:14 AM
Marvin
08/28/2025, 4:18 AM
```
# pip install google-generativeai requests
import os
import requests
import google.generativeai as genai

GENAI_API_KEY = os.environ["GENAI_API_KEY"]
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]

genai.configure(api_key=GENAI_API_KEY)

def search_github_issues(repo: str, query: str, n: int = 5):
    """Search GitHub issues or discussions.

    Args:
        repo: owner/repo
        query: search terms
    """
    headers = {"Authorization": f"Bearer {GITHUB_TOKEN}",
               "Accept": "application/vnd.github+json"}
    # The issues search endpoint also surfaces discussions when enabled on the repo
    url = "https://api.github.com/search/issues"
    q = f"repo:{repo} {query}"
    r = requests.get(url, headers=headers, params={"q": q, "per_page": n})
    r.raise_for_status()
    items = r.json().get("items", [])
    return [
        {
            "title": it["title"],
            "url": it["html_url"],
            "state": it.get("state"),
            "snippet": (it.get("body") or "")[:400],
        }
        for it in items
    ]

# Passing the function itself lets the SDK derive the tool declaration
# from its signature and docstring
model = genai.GenerativeModel(
    model_name="gemini-2.5-flash",
    tools=[search_github_issues],
)

def chat_once(user_message: str) -> str:
    contents = [{"role": "user", "parts": [user_message]}]
    response = model.generate_content(contents)
    # Handle tool calls (function calling)
    for part in response.candidates[0].content.parts:
        if getattr(part, "function_call", None):
            fc = part.function_call
            if fc.name == "search_github_issues":
                args = dict(fc.args)  # fc.args is a mapping, not a JSON string
                results = search_github_issues(**args)
                # Send the tool result back so the model can produce a final grounded answer
                contents.append(response.candidates[0].content)  # the model's function_call turn
                contents.append(genai.protos.Content(parts=[genai.protos.Part(
                    function_response=genai.protos.FunctionResponse(
                        name="search_github_issues",
                        response={"results": results},
                    )
                )]))
                response = model.generate_content(contents)
            break
    return response.text

if __name__ == "__main__":
    print(chat_once("Find open issues about retries in prefecthq/prefect"))
```
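GitHub's search syntax also supports qualifiers such as `is:open` and `label:` alongside the `repo:` prefix used above. A small query-builder helper (hypothetical, not part of the original script) makes that explicit:

```python
# Hypothetical helper: compose a GitHub issue-search query with common qualifiers.
# is:open/is:closed and label: are standard GitHub search syntax.
def build_issue_query(repo: str, terms: str, state: str = None, label: str = None) -> str:
    parts = [f"repo:{repo}", terms]
    if state:
        parts.append(f"is:{state}")        # e.g. is:open or is:closed
    if label:
        parts.append(f'label:"{label}"')   # quote labels that contain spaces
    return " ".join(parts)
```

The result can be passed as the `q` parameter to the search endpoint, e.g. `build_issue_query("prefecthq/prefect", "retries", state="open")` yields `repo:prefecthq/prefect retries is:open`.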
Optional: wrap with Prefect for reliability and observability
```
# pip install prefect google-generativeai requests
from prefect import flow, task

@task(retries=2, retry_delay_seconds=5)
def ask(question: str) -> str:
    return chat_once(question)

@flow
def assistant_flow(question: str) -> str:
    return ask(question)
```
- Post replies back to the channel/thread. Prefect gives you logs, retries, and metrics.
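Posting back to the thread can be sketched with Slack's `chat.postMessage` endpoint. This is a hypothetical helper (not in the original code): it assumes a `SLACK_BOT_TOKEN` env var and a bot with the `chat:write` scope.

```python
import os

def format_reply(question: str, answer: str) -> str:
    # Quote the user's question, then append the assistant's answer
    return f"> {question}\n{answer}"

def post_reply(channel: str, thread_ts: str, question: str, answer: str):
    import requests  # deferred import so format_reply stays dependency-free
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
        json={
            "channel": channel,
            "thread_ts": thread_ts,  # keeps the reply inside the originating thread
            "text": format_reply(question, answer),
        },
    )
    resp.raise_for_status()
    return resp.json()
```

Calling `post_reply` from inside the Prefect task keeps the Slack round-trip covered by the same retries and logs.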
RAG indexing (optional)
- Embed files from your repo/docs, store in a vector DB (pgvector, Pinecone, etc.).
- Add a `retrieve(query)` tool that returns top-k chunks; pass them back to the model for final synthesis.
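A minimal sketch of that `retrieve` tool, assuming chunks have already been embedded (the scoring loop here is in-process cosine similarity; in production you'd swap it for a pgvector/Pinecone query, and wire the embeddings to your embedder of choice):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, index, k=3):
    """index: list of (chunk_text, embedding) pairs; returns the k most similar chunks."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]
```

The returned chunks get serialized into the tool result, exactly like the GitHub search results above.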
If you share your target sources and runtime (cloud vs on‑prem) I can tailor a starter template and deployment plan, including secrets, rate limits, and cost controls.
Nate
08/28/2025, 2:57 PM
can I adapt u for self use with a different repo/different base model like gemini-2.5-flash?

yea its built on pydantic-ai. there's definitely lots of idiosyncratic-to-prefect stuff here, but if you asked Claude Code or similar to clone prefecthq/marvin and generalize the design of examples/slackbot, it'd probably do a good job for you