ControlFlow provides an example of using multiple large language models (LLMs) within a single workflow. This approach lets you match each model to the task it handles best, optimizing for both speed and quality. Here's a brief overview:
1. Multiple LLM Models: The example uses GPT-4o-mini for efficient classification tasks, while GPT-4o handles more complex synthesis tasks like generating comprehensive summaries.
2. Specialized Agents: Separate agents are created for different tasks, each using its own LLM model. This setup keeps the workflow both efficient and high-quality.
3. Structured Data: Pydantic models are used to ensure type safety and consistent data structures throughout the workflow.
4. Task-Specific Result Types: Each task declares a result type that matches its expected output, so each agent returns exactly the information the workflow needs. The sketch after this list shows how all four points fit together.
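Here is a minimal sketch of that pattern, assuming ControlFlow's `cf.Agent`, `cf.run`, and `@cf.flow` APIs; the `Classification` model, the `analyze` flow, and the exact task objectives are illustrative, not taken from the example itself:

```python
import controlflow as cf
from pydantic import BaseModel


# A Pydantic model keeps the data passed between tasks structured and type-safe
class Classification(BaseModel):
    category: str
    confidence: float


# A fast, inexpensive model for the simple classification step
classifier = cf.Agent(name="Classifier", model="openai/gpt-4o-mini")

# A more capable model for the synthesis step
synthesizer = cf.Agent(name="Synthesizer", model="openai/gpt-4o")


@cf.flow
def analyze(text: str) -> str:
    # Classification task: cheap model, structured result type
    classification = cf.run(
        "Classify the text into a category",
        agents=[classifier],
        context={"text": text},
        result_type=Classification,
    )
    # Synthesis task: stronger model, plain-string result type
    summary = cf.run(
        "Write a comprehensive summary of the text, noting its category",
        agents=[synthesizer],
        context={"text": text, "classification": classification},
        result_type=str,
    )
    return summary
```

Each `cf.run` call is a task with its own agent and `result_type`, which is how the example separates the cheap classification step from the higher-quality synthesis step.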
For more details, you can explore the Multi-LLM Workflows example.
Additionally, ControlFlow supports any LangChain LLM that supports chat and function calling, and it uses provider-specific rules to compile messages correctly for each agent's API. More information can be found in the ControlFlow LLMs guide.
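For instance, a LangChain chat model instance can be passed directly as an agent's model. This is a minimal sketch; it assumes the `langchain-anthropic` package is installed, and the model name is purely illustrative:

```python
import controlflow as cf
from langchain_anthropic import ChatAnthropic

# Any LangChain chat model with tool/function-calling support can back an agent
agent = cf.Agent(
    name="Researcher",
    model=ChatAnthropic(model="claude-3-5-sonnet-20240620"),
)
```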
No relevant GitHub issues were found, which suggests the examples and documentation should cover your needs well.