Kimmo Hintikka
02/19/2024, 1:24 PM

Nate
02/19/2024, 7:34 PM

Nate
02/19/2024, 7:36 PM
Nate
02/19/2024, 7:46 PM
SomeVectorstore.from_text and then you run some retrieval chain against it? I think marvin doesn't do RAG like that directly, as in, we're not trying to write wrappers around a bunch of tools like llama index / pinecone etc. we mostly just integrate closely with pydantic and openai. and we're more concerned with building tools that allow you to easily use an LLM to go from "arbitrary input format" (e.g. str, image, audio) -> native python/pydantic type, which we see as just "functional" prompt engineering, where you can use the outputs in normal code instead of making all your code about "ai stuff"
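for example, a minimal sketch of that "str -> pydantic type" idea (assuming marvin 2.x's `marvin.cast` and an OpenAI key in the environment; the `Location` model is just illustrative):

```python
import marvin  # assumes marvin 2.x is installed and OPENAI_API_KEY is set
from pydantic import BaseModel


class Location(BaseModel):
    city: str
    state: str


# "functional" prompt engineering: arbitrary text in, a native pydantic object out,
# so the rest of your code never has to care that an LLM was involved
location = marvin.cast("the big apple", target=Location)
print(location)  # e.g. Location(city='New York', state='NY')
```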
here are some examples you might find interesting:
• use vision to process an insurance claim
• create a multi-modal summary of a github repo's daily activity
• the implementation for the slackbot that runs every day (that OpenAI is working) in #C04DZJC94DC

Kimmo Hintikka
02/21/2024, 9:52 AM

Nate
02/21/2024, 12:42 PM

Kimmo Hintikka
02/21/2024, 12:54 PM

Nate
02/21/2024, 2:33 PM