# marvin-ai
a
# Imports needed for this snippet
from typing import List

import controlflow as cf
from langchain_ollama import ChatOllama
from pydantic import BaseModel

# Define a Pydantic model for categorization results
class CategorizationResult(BaseModel):
    categories: List[str]

model_name = 'qwen2.5-coder:14b'
model = ChatOllama(model=model_name, temperature=0)

# Create a summarization agent
summary_agent = cf.Agent(
    name="Video Transcript Summarization Agent",
    description="Expert in summarizing YouTube video transcripts.",
    model=model,
)

# Create a categorization agent
categorize_agent = cf.Agent(
    name="Topic Categorization Agent",
    description="Expert in categorizing topics from video descriptions.",
    model=model,
)

# Example prompt for summarization
summary_prompt = "Summarize the following video transcript: '...'"

# Run the summarization agent
summary_result = cf.run(
    objective="Summarize the video to capture main insights",
    instructions=summary_prompt,
    agents=[summary_agent]
)

# Example categorization task
categorize_task = "Categorize the following summary into topics: '...'"

# Run the categorization agent with a Pydantic result type
categories_output = cf.run(
    objective="Categorize the video summary into predefined topics",
    instructions=categorize_task,
    result_type=CategorizationResult,
    agents=[categorize_agent]
)
print("Categories Result:", categories_output.categories)
I'm having an issue with Ollama models: I'm unable to get any intelligible results using Ollama. I've tried:
• llama3-groq-tool-use:8b
• nemotron-mini:latest
• nexusraven:latest
• llama3.1:8b
• qwen2.5-coder:14b
and they all seem to miss the mark on summarization. I could do this easily using just the ollama package, so I'm not sure if there's something I'm missing, or if it's an issue with Ollama models where the status of the task is being used as the result rather than the generated content.

---

With Ollama:
Processing video: AlphaProteo - Google DeepMind's Breakthrough AI for "Protein Design"
╭─ Agent: Video Transcript Summarization Agent 
│  ✅ Tool call: "mark_task_9f2c7734_successful"                                                   
│     Tool args: {'task_result': 'Summary generated successfully'}                                 
│     Tool result: Task #9f2c7734 ("Summarize the video to capture main insights") marked          
│     successful.                                                                                  
╰────────────────────────────────────────────────────────────────────────────────────  1:55:51
With openai/gpt-4o-mini:
Processing video: AlphaProteo - Google DeepMind's Breakthrough AI for "Protein Design"
╭─ Agent: Video Transcript Summarization Agent 
│  ✅ Tool call: "mark_task_7d0275e3_successful"                                                                                                                                          
│     Tool args: {'task_result': "In this video, Wes Roth discusses Google DeepMind's              
│     groundbreaking AI model, AlphaProteo, which is capable of generating designer proteins...."}                                                                                                                                                    
│     Tool result: Task #7d0275e3 ("Summarize the video to capture main insights") marked          
│     successful.
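To isolate whether this is a model problem or a prompting problem, a quick baseline is to send the same transcript straight to the model through the ollama package, outside ControlFlow's agent/task/tool scaffolding, as the message above suggests should work. A minimal sketch, assuming Ollama is running locally, `transcript` holds the same text passed to the ControlFlow task, and the model name is just one of those tried above:

```python
# Baseline check: call the model directly via the ollama package,
# bypassing ControlFlow's agent/task/tool machinery entirely.
import ollama

transcript = "..."  # same transcript given to the ControlFlow task

response = ollama.chat(
    model="qwen2.5-coder:14b",
    messages=[
        {"role": "system", "content": "You summarize YouTube video transcripts."},
        {"role": "user", "content": f"Summarize the following transcript:\n\n{transcript}"},
    ],
)
print(response["message"]["content"])
```

If this prints a reasonable summary, the model itself is fine and the failure is in how the ControlFlow prompt and tool-call structure is being handled, which matches the `'Summary generated successfully'` task_result in the Ollama log above.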
c
I think step number two would be extracting the raw messages exchanged with the API (LLM) and seeing what's actually being sent.
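Since ControlFlow is handed a LangChain chat model here (ChatOllama), one way to see that raw exchange without touching ControlFlow internals is LangChain's global debug flag, which prints every prompt and response that passes through the model. A sketch, assuming the agents are built as in the snippet above and that ControlFlow routes calls through the model's standard LangChain invoke path:

```python
# Print every prompt/response that goes through the LangChain chat model,
# so the exact messages ControlFlow sends to Ollama are visible.
from langchain.globals import set_debug

set_debug(True)  # very verbose: logs all LLM inputs and outputs

summary_result = cf.run(
    objective="Summarize the video to capture main insights",
    instructions=summary_prompt,
    agents=[summary_agent],
)
```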
j
In general we have seen really poor performance from many open-source models, but I think it is because our prompt structure is optimized for OpenAI / Anthropic. Looking to change this soon.
t
Meanwhile, would using an Ollama LLM as a function-calling tool be a workaround? What do you think?
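One shape that workaround could take: keep a model that handles ControlFlow's tool-calling prompts well as the orchestrating agent, and expose the local Ollama model as a plain Python function the agent calls for the actual summarization. A rough sketch, assuming cf.run accepts ordinary Python callables via a tools argument (the function name and model here are illustrative, and summary_prompt comes from the snippet above):

```python
# Workaround sketch: wrap the local Ollama model as a tool that a more
# tool-call-friendly agent can invoke for the heavy lifting.
import ollama

def summarize_with_ollama(transcript: str) -> str:
    """Summarize a video transcript using a local Ollama model."""
    response = ollama.chat(
        model="qwen2.5-coder:14b",
        messages=[{"role": "user", "content": f"Summarize this transcript:\n\n{transcript}"}],
    )
    return response["message"]["content"]

summary_result = cf.run(
    objective="Summarize the video to capture main insights",
    instructions=summary_prompt,
    tools=[summarize_with_ollama],  # assumption: cf.run forwards tools to the task
)
```

This keeps the local model doing the generation while the default (e.g. OpenAI) agent only has to decide to call the tool and report its output, sidestepping the prompt-structure mismatch.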