# marvin-ai
d
Hello, is there any documentation on ensuring that a task performs the exact same generated code each time ?
n
hi @datamongus - what do you mean by "generated code" here?
d
For example, how do I ensure get_weather will produce the same underlying logic each time it is called ?
```python
def get_weather(location: str) -> float:
    """Fetch weather data for the specified location."""
    # Implementation details...
```
Could just be my ignorance of how the docstrings work in cases where you don’t supply all of the code.
n
are you looking at an example from the docs? ah yeah, i think there might be some confusion. for a cf.task, all that matters is the return annotation. there’s no generated code. it’s just like marvin:
```python
@marvin.fn
def sentiment(text: str) -> float:
    """
    Returns a sentiment score for `text`
    between -1 (negative) and 1 (positive).
    """
```
we’re just prompting the LLM to produce JSON that matches a JSON schema suggested by your return annotation
then we use pydantic to cast the resulting json into the float that you asked for (or whatever you did ask for)
“the llm is the runtime” there is no python in the body of the function
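the round trip can be sketched like this (a toy stdlib illustration, not marvin’s actual internals — `fake_llm_response` is a stand-in for the model call, and marvin does the final cast with pydantic rather than a bare `float()`):

```python
import json

def fake_llm_response() -> str:
    # Stand-in for the LLM call: the model is prompted to emit JSON
    # matching the schema implied by the return annotation (a number).
    return "0.75"

def sentiment(text: str) -> float:
    raw = fake_llm_response()   # "the llm is the runtime"
    value = json.loads(raw)     # parse the model's JSON output
    return float(value)         # cast into the annotated return type
                                # (marvin does this step via pydantic)

print(sentiment("i love this"))  # -> 0.75
```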
but if you’re asking about something else and i’m not making sense, please let me know and link it!
d
Thank you for that explanation. This makes sense.
I’m more or less trying to understand if there is a way, or a best practice, to force the underlying LLM to perform the exact same logic each time. For example, is there a risk of the LLM changing its logic, which may slightly impact the results each time?
n
so there is a knob you can tweak here, which is `temperature`, which is supported for most llms
it’s basically low temp -> more deterministic, high temp -> more "creative" (less predictable)
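that knob can be illustrated with a toy softmax sampler (this is just the standard temperature-scaling math, not any particular llm’s internals):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick a token index from raw logits after temperature scaling.

    Dividing logits by a small temperature sharpens the softmax toward
    the argmax (near-deterministic); a large temperature flattens the
    distribution, so sampling gets less predictable.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5]
# At a very low temperature the top logit dominates almost completely:
low_temp_picks = {sample_with_temperature(logits, 0.05) for _ in range(100)}
print(low_temp_picks)  # -> {0}
```

at low temperature every one of the 100 draws lands on index 0 (the largest logit); crank the temperature up and the same loop starts returning a mix of all three indices.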
d
Ah bingo!
thank you