Constantin Teo
10/23/2024, 5:50 PM
tool_from_model, found in module marvin.utilities.tools
Marvin
10/23/2024, 5:50 PM
I couldn't find a tool_from_model utility in the marvin.utilities.tools module on GitHub or in the Prefect-related repositories. It might not be part of publicly available Prefect documentation or issues.
To get more information, you might want to check the source code directly if you have access to it, or contact the maintainers if this is part of a private or less-documented library. If you have more context or specifics about where this utility is used, I can try another approach or offer guidance based on general software development practices.
Nate
10/23/2024, 5:50 PM
Constantin Teo
10/23/2024, 5:51 PM
Constantin Teo
10/23/2024, 5:51 PM
Nate
10/23/2024, 5:51 PM
Nate
10/23/2024, 5:52 PM
Constantin Teo
10/23/2024, 5:53 PM
Constantin Teo
10/23/2024, 5:54 PM
subprocess.run
)
Nate
10/23/2024, 5:58 PM
> Are functions run in OpenAI's engine itself?
nope, it's basically like this:
marvin lib: "hey openai, a user just asked what the weather is in chicago and i have a get_weather tool that accepts a city param, here's the openapi schema - how should I call the function?"
openai: "cool, yeah, i'd say you should call get_weather('chicago')"
marvin lib: "ok, the result of get_weather('chicago') is 'sunny and 60 degrees'"
openai: "great, producing the final chat message now based on that tool result"
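The back-and-forth Nate describes can be sketched as client-side code. This is a minimal illustration of the loop, not marvin's actual implementation: get_weather, its schema, and the simulated model reply are hypothetical stand-ins for real OpenAI API round trips.

```python
import json

def get_weather(city: str) -> str:
    """Hypothetical tool; a real one would call a weather API."""
    return "sunny and 60 degrees"

# Registry of tools the client is willing to run locally.
TOOLS = {"get_weather": get_weather}

# The openapi-style schema the library would send to the model,
# describing the tool and its single `city` parameter.
TOOL_SCHEMA = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def handle_tool_call(tool_call: dict) -> str:
    """Run the model-requested tool locally and return its result.

    `tool_call` mimics the shape of a model's tool-call request:
    a function name plus JSON-encoded arguments.
    """
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated model decision: "you should call get_weather('chicago')".
# The result goes back to the model, which writes the final message.
result = handle_tool_call(
    {"name": "get_weather", "arguments": '{"city": "chicago"}'}
)
print(result)  # sunny and 60 degrees
```

The key point from the thread: the function body only ever executes on the client; the model just picks the function and arguments from the schema.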
Nate
10/23/2024, 5:58 PM
Constantin Teo
10/23/2024, 5:58 PM
Nate
10/23/2024, 5:59 PM
Constantin Teo
10/23/2024, 6:09 PM
Constantin Teo
10/23/2024, 6:11 PM
Nate
10/23/2024, 6:15 PM
In [1]: import controlflow as cf

In [2]: cf.run("i ate 3 bagels with about 16 oz of cream cheese", result_type=list[int])
Out[2]: [3, 16]
Nate
10/23/2024, 6:17 PM
Nate
10/23/2024, 6:18 PM
code-interpreter and baked-in tools that I think openai still offers
Constantin Teo
10/23/2024, 6:19 PM
> and fwiw im not familiar with any LLM providers that run arbitrary user code server side, I think most frameworks will run user code (that assistants use as tools) client side
That is correct - it's more of a sugar coat: instead of having the OpenAI engine "invoking" your function, you imperatively invoke it yourself, with a prompt like:
These are your choices:
{{ *functions }}
Which one should be invoked, based on {{ context }}
...and the client goes on to invoke the function manually. No schemas involved. That's the DIY I was referring to
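The DIY pattern Constantin describes can be sketched as follows. build_choice_prompt and invoke_choice are hypothetical names, the registry is a toy, and the actual model call is omitted; the point is that the model sees only a plain-text listing, no schemas, and the client matches the free-text answer and calls the function itself.

```python
def build_choice_prompt(functions: dict, context: str) -> str:
    """Hypothetical DIY prompt: list the callables by name and ask
    the model, in plain text, which one to invoke."""
    listing = "\n".join(f"- {name}" for name in functions)
    return (
        "These are your choices:\n"
        f"{listing}\n"
        f"Which one should be invoked, based on: {context}"
    )

def invoke_choice(answer: str, functions: dict, *args, **kwargs):
    """Match the model's free-text answer against the registry and
    invoke the chosen function manually, client side."""
    for name, fn in functions.items():
        if name in answer:
            return fn(*args, **kwargs)
    raise ValueError(f"model picked an unknown function: {answer!r}")

registry = {"get_weather": lambda city: f"weather in {city}"}
prompt = build_choice_prompt(registry, "what's the weather in chicago?")
# Suppose the model's free-text reply to `prompt` was "get_weather":
result = invoke_choice("get_weather", registry, "chicago")
```

The fragility of this approach (string-matching a free-text answer) is exactly what native tool calling with schemas later smoothed over.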
Nate
10/23/2024, 6:21 PM
marvin in early 2023), but now frameworks like controlflow or langchain just let you pass the native python function and abstract away the fact that the API gets an openapi schema, which the LLM provider has fine-tuned the model to respect
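What "pass the native python function" hides can be sketched roughly: the framework introspects the function's signature and ships a JSON/openapi-style schema to the API on your behalf. function_to_schema below is a simplified hypothetical, not controlflow's or langchain's actual code, and the type mapping covers only a few primitives.

```python
import inspect

# Minimal mapping from Python annotations to JSON schema types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_schema(fn) -> dict:
    """Sketch of deriving an openapi-style tool schema from a
    native Python function's signature and docstring."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        # Parameters without defaults are treated as required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": required,
        },
    }

def get_weather(city: str) -> str:
    """Look up current weather for a city."""
    ...

schema = function_to_schema(get_weather)
# schema["name"] is "get_weather"; "city" is a required string parameter.
```

This is the abstraction boundary the thread is pointing at: you write a plain function, the model only ever sees the schema.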
Constantin Teo
10/23/2024, 6:21 PM
Nate
10/23/2024, 6:22 PM
prefect dashboard open - you'll see the work happening on your server / workspace