Nat Taylor
10/04/2024, 7:53 PM
Nat Taylor
10/04/2024, 7:54 PM
Nat Taylor
10/04/2024, 8:15 PM
import logging
logging.getLogger("controlflow").setLevel(logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
I was also pleased that langtrace worked with no hiccups via this setup, giving a nice UI:
from langtrace_python_sdk import langtrace
langtrace.init(**config)
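(config here is just whatever settings get passed to langtrace.init; a minimal sketch, assuming a hosted Langtrace project and that api_key is the expected keyword, would be:)
# hypothetical config; replace with your own Langtrace credentials
config = {"api_key": "<your-langtrace-api-key>"}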
Nat Taylor
10/04/2024, 8:15 PM
Jeremiah
Nat Taylor
10/04/2024, 8:34 PM
Jeremiah
Nat Taylor
10/04/2024, 8:38 PM
Jeremiah
@cf.flow
def podcast():
    lines = []
    while True:
        next_line = bill.run("generate the next line in the podcast")
        lines.append(next_line)
        next_line = hillary.run("generate the next line in the podcast")
        lines.append(next_line)
        # break somehow
    return '\n\n'.join(lines)
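(A runnable version of that sketch, assuming ControlFlow Agent objects for bill and hillary, with a simple cap on the number of turns standing in for the "break somehow" part:)
import controlflow as cf

bill = cf.Agent(name="Bill")
hillary = cf.Agent(name="Hillary")

@cf.flow
def podcast(max_turns: int = 5):
    lines = []
    for _ in range(max_turns):
        # alternate speakers; a fixed turn count replaces the open-ended while loop
        lines.append(bill.run("generate the next line in the podcast"))
        lines.append(hillary.run("generate the next line in the podcast"))
    return '\n\n'.join(lines)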
Jeremiah
Dave Aitel
10/04/2024, 10:29 PM
Nat Taylor
10/05/2024, 1:45 AM
...it...it...it...it...it...it...
What kind of subjects do you remember? I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember. I remember.
I was like, oh, I'm sorry. I was like, oh, I'm sorry. I was like, oh, I'm sorry. I was like, oh, I'm sorry. I was like, oh, I'm sorry. I was like, oh, I'm sorry. I was like, oh, I'm sorry. I was like, oh, I'm sorry.
They were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women. And they were for women.
This text is the output of mlx-whisper, so maybe it's a strange feedback cycle where the model gets tripped up during the transcription, then also tripped up as input tokens (although I don't have any hypotheses on what "tripped up" really means).
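(If the repeated sentences are also going back in as input tokens, one cheap guard, purely a sketch, is to collapse consecutive duplicates before the transcript is reused:)
import re

def collapse_repeats(text: str) -> str:
    # split on sentence boundaries and drop consecutive duplicate sentences
    sentences = re.split(r'(?<=[.!?])\s+', text)
    deduped = [s for i, s in enumerate(sentences) if i == 0 or s != sentences[i - 1]]
    return ' '.join(deduped)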
Nat Taylor
10/05/2024, 1:45 AM
Nat Taylor
10/05/2024, 2:56 AM
openai.APIError: The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.
At least in my case it was often (always?) the result of an EMPTY response from the API, so I followed this thread: https://community.openai.com/t/empty-text-in-the-response-from-the-api-after-few-calls/2067/4 which says a space/newline at the end of the prompt can cause issues.
So I added the string "Please follow the instructions." to the end of llm_instructions.jinja as a lazy way to avoid trailing newlines, and I haven't hit the error since.
Maybe there's an opportunity to add a strip() around the prompt as an experiment?
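(A minimal sketch of that experiment, assuming the prompt text comes from rendering the Jinja template right before the API call:)
from jinja2 import Template

def render_prompt(template_text: str, **context) -> str:
    # strip trailing whitespace/newlines so the API never sees a dangling newline
    return Template(template_text).render(**context).strip()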
Nat Taylor
10/05/2024, 3:32 AM
Jeremiah
Dave Aitel
10/08/2024, 11:33 PM
Dave Aitel
10/08/2024, 11:36 PM
Nat Taylor
10/09/2024, 7:32 PM
Dave Aitel
10/09/2024, 7:43 PM
Jason
10/16/2024, 3:58 PM
Dave Aitel
10/16/2024, 4:04 PM
Jeremiah
Jason
10/16/2024, 4:37 PM
Dave Aitel
10/16/2024, 4:43 PM
Jeremiah