# ask-marvin
c
@Marvin provide examples on how to use one of your own tooling utilities, namely `tool_from_model`, found in the module `marvin.utilities.tools`
m
I couldn't find any specific examples or documentation related to the `tool_from_model` utility in the `marvin.utilities.tools` module on GitHub or in the Prefect-related repositories. It might not be part of publicly available Prefect documentation or issues. To get more information, you might want to check the source code directly if you have access to it, or contact the maintainers if this is part of a private or less-documented library. If you have more context or specifics about where this utility is used, I can try another approach or offer guidance based on general software development practices.
n
😄 marvin doesn't have its own source code in the vectorstore, just prefect stuff
c
@Nate since you're here (thank you for replying), do you happen to know if it expects an OpenAI JSON-schematized model?
n
if what does?
functions you give to assistants as tools need to be represented as openapi schemas eventually, if that's what you're asking
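For illustration, a minimal sketch of what that representation looks like: a plain python function alongside the JSON-schema tool definition a framework would send in the `tools` field of an OpenAI chat-completions request (the `get_weather` example is illustrative, not Marvin's actual output):
```python
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    ...

# What ultimately goes over the wire: an openapi/JSON-schema description
# of the function, not the function itself.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```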
c
Yes that's what I was asking - didn't know OpenAI itself followed the OpenAPI schema 😄
Are functions run in OpenAI's engine itself? Before I spend more time learning Marvin, I'm trying to figure out if we can build custom tools and run them in our own sandbox environment (which could very likely be local, i.e. using `subprocess.run`)
n
> Are functions run in OpenAI's engine itself?
nope, it's basically like:
marvin lib: hey openai, a user just asked what the weather is in chicago and i have a `get_weather` tool that accepts a `city` param, here's the openapi schema - how should I call the function?
openai: cool yeah i'd say you should call `get_weather("chicago")`
marvin lib: ok the result of `get_weather("chicago")` is "sunny and 60 degrees"
openai: great, producing the final chat message now based on that tool result
so we run the tools you give to assistants client side, and tell openai what we see
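To make that round trip concrete, here is a minimal sketch using the raw `openai` Python SDK rather than Marvin's internals (the model name and weather tool are illustrative); note that the tool body executes client side:
```python
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return "sunny and 60 degrees"  # runs locally, never on OpenAI's servers

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "what's the weather in chicago?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# openai replies with the tool call it wants made; we execute it ourselves
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
messages.append(resp.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": get_weather(**args),
})

# send the result back so the model can produce the final chat message
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```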
c
Basically only the function signature is passed
n
yep, represented as an openapi schema
c
great, gonna experiment with creating a shell tool and see how that goes - sharing a further thought on this: I recall not all models support engine-driven function calling (though that's rapidly evolving and more models do). I assume imperative function-calling implementations aren't and won't be on the Marvin roadmap given its simplistic nature, hence leaving such "DIY" use cases to users' own implementations
I'll be happy with a thumbs up to confirm, don't wanna take more time away from you in writing an extensive answer 🤜🤛
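As a starting point for that experiment, a hypothetical sketch of a local shell tool built on `subprocess.run` (a real sandbox would need far stronger isolation than this timeout and allowlist):
```python
import shlex
import subprocess

ALLOWED = {"ls", "cat", "echo"}  # hypothetical allowlist

def run_shell(command: str) -> str:
    """Run an allowlisted shell command locally and return its output."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        return f"command not allowed: {command}"
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr
```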
n
candidly, there is actually nothing on marvin's roadmap; i am passively maintaining it. controlflow is the spiritual successor to the marvin experiment and has broader LLM provider support, so if you're going to dive into something, I would pick controlflow. marvin is great if you want native types from unstructured inputs without a lot of overhead, but for the full-blown "make your own agent with tools" I would recommend controlflow. you can still do structured outputs in controlflow as well
```python
In [1]: import controlflow as cf

In [2]: cf.run("i ate 3 bagels with about 16 oz of cream cheese", result_type=list[int])

Out[2]: [3, 16]
```
and fwiw i'm not familiar with any LLM providers that run arbitrary user code server side, I think most frameworks will run user code (that assistants use as tools) client side
the only exceptions i guess are things like `code-interpreter` and baked-in tools that I think openai still offers
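For reference, a minimal sketch of opting into that server-side exception, `code_interpreter`, via the OpenAI Assistants API (assuming the standard `openai` Python SDK; the assistant name is illustrative):
```python
from openai import OpenAI

client = OpenAI()

# code_interpreter is one of the few tools that runs on OpenAI's side
# (in their sandbox); user-defined function tools still run client side.
assistant = client.beta.assistants.create(
    name="data-helper",  # illustrative
    model="gpt-4o",
    tools=[{"type": "code_interpreter"}],
)
```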
c
> and fwiw i'm not familiar with any LLM providers that run arbitrary user code server side, I think most frameworks will run user code (that assistants use as tools) client side
That is correct, it's more of a sugar coating: instead of having the OpenAI engine "invoke" your function, you imperatively invoke it yourself, with a prompt like:
```
These are your choices:
{{ *functions }}
Which one should be invoked, based on {{ context }}
```
...and the client goes on to invoke the function manually. No schemas involved. That's the DIY I was referring to
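A hypothetical sketch of that pre-schema DIY pattern (the registry, prompt wording, and JSON reply format are all illustrative, and real code would need more robust parsing of the model's reply):
```python
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return "sunny and 60 degrees"  # stand-in implementation

FUNCTIONS = {"get_weather": get_weather}  # hypothetical registry

prompt = (
    "These are your choices:\n"
    "- get_weather(city: str): get the current weather for a city\n"
    "Which one should be invoked, based on this context: "
    "'what's the weather in chicago?'\n"
    'Reply with JSON like {"name": ..., "args": {...}} and nothing else.'
)

resp = client.chat.completions.create(
    model="gpt-4o", messages=[{"role": "user", "content": prompt}]
)
choice = json.loads(resp.choices[0].message.content)
result = FUNCTIONS[choice["name"]](**choice["args"])  # manual invocation
```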
n
yeah I think this was just how you wrote system prompts before "function calling" existed (which is what we were doing in `marvin` in early 2023), but now frameworks like controlflow or langchain just let you pass the native python function and abstract away the fact that the API gets an openapi schema, which the LLM provider has fine-tuned the model to respect
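For example, a minimal sketch of handing a native python function to controlflow as a tool (assuming controlflow's `tools` parameter on `cf.run`; the weather function is a stand-in):
```python
import controlflow as cf

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return "sunny and 60 degrees"  # stand-in implementation

# the framework derives the openapi-style schema from the signature and
# docstring, so we never write the schema by hand
result = cf.run("what's the weather in chicago?", tools=[get_weather])
```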
c
I was hesitant to try out controlflow because it was not clear how it differs from or integrates with prefect, which is the core of my backend
n
controlflow is itself backed by prefect. so if you run some controlflow example and then do `prefect dashboard open`, you'll see the work happening on your server / workspace