# ask-community
m
Hi all, my flows seem to be timing out in some way. Has anyone seen an error similar to
15:40:34.109 | ERROR   | Task run '<name>' - Crash detected! Execution was interrupted by an unexpected exception: PrefectHTTPStatusError: Client error '404 Not Found' for url '<https://api.prefect.cloud/api/accounts/***/workspaces/***/task_runs/***/set_state>'
z
Looks like an authentication / permissions issue — what kind of token are you using?
m
Just using a standard API key (that never expires). The flow runs for a good 10-15 minutes before seeing this issue. It is executing a for loop from within a task. It outputs a logger message with each iteration and gets many iterations in before seeing this message. The strange thing is this happens both locally and in Cloud. If the flow is triggered from within the UI, it eventually just hangs. If I run the flow locally, I see this error.
z
Can you share a minimal example?
m
Not entirely. The code is querying an API iteratively. This API querying is done inside of a function that is called from inside a task. It looks like the task is timing out before the function completes.
@Zanie another piece of key info I forgot to mention: the flow that is failing is being kicked off as a subflow.
from prefect import flow, task

URL = "<api endpoint>"  # placeholder

def get_response(url):
    responses = []
    while some_condition:                 # placeholder: loop until the API is exhausted
        responses.append(query_api(url))  # placeholder for the actual API request
    return responses

@task
def some_task():
    return get_response(URL)

@flow
def my_subflow():
    some_task()

@flow
def foo():
    my_subflow()
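If a task-level timeout were in play here, it would have to be set explicitly, since Prefect 2 tasks have no timeout unless one is configured. A minimal sketch of what an explicit timeout looks like, assuming the timeout_seconds task option (the value is illustrative, and get_response/URL refer to the sketch above):

from prefect import flow, task

# Illustrative: an explicit per-task timeout. If timeout_seconds is left
# unset, Prefect does not cut the task off, no matter how long the loop runs.
@task(timeout_seconds=1800)
def some_task():
    return get_response(URL)

@flow
def my_subflow():
    some_task()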
As an additional follow-up: I know there are settings pertaining to whether or not data is persisted once execution is halted (commonly seen when trying to retry a failed flow run), but in this specific case the subflow hangs (continues to run) while the parent flow is halted in a crashed state. What I cannot figure out is why the subflow continues to be shown as running in the UI, why the logs are not getting persisted, and why the subflow does not appear to be running on our pods. It is as if there is a ghost process, one that never got a stop signal, executing on the Prefect UI backend.
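For reference, a minimal sketch of the persistence knobs being referred to, assuming Prefect 2's persist_result keyword; the function names (fetch_data, example_flow) are hypothetical:

from prefect import flow, task

# Illustrative: opting a task and flow into result persistence. By default
# Prefect 2 does not store return values, which is why a halted run cannot
# simply resume from the last completed task.
@task(persist_result=True)
def fetch_data():
    return {"rows": 42}

@flow(persist_result=True)
def example_flow():
    return fetch_data()

The same behaviour can also be switched on globally via the PREFECT_RESULTS_PERSIST_BY_DEFAULT setting; the trade-off of enabling it is the extra serialization and storage work on every run.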
z
The subflow isn't running in a separate process or anything; it sounds like the main flow is crashing in a way that prevents the subflow from being reported as crashed.
m
Can you explain what the benefit of not persisting state is? Why is that the default, and what are the implications of enabling it?
z
Are you talking about persisting results?
m
Yes. But it looks like this is overflowing into the logs of the subflow. If I run the subflow directly through its deployment, the logger is not being persisted, and neither is state. For instance:
Whereas locally, I would see the logs in real time. I do understand results are not persisted by default, but that is not my issue here.
It looks like when the flow crashes, if the logs do appear, there is an indication Prefect has lost track of the parent flow. The logs will show
Downloading flow code from storage at '<parent flow name>'
And the subflows do all run if triggered from their deployment directly.
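On the logging side, a minimal sketch of how the per-iteration messages would need to be emitted to reach the Prefect API at all, assuming get_run_logger is in use (a plain logging.getLogger logger only shows up in the UI if it is added to the PREFECT_LOGGING_EXTRA_LOGGERS setting); the loop here is illustrative:

from prefect import flow, task, get_run_logger

@task
def some_task():
    logger = get_run_logger()           # run-scoped logger, sent to the API/UI
    for i in range(10):                 # illustrative stand-in for the API loop
        logger.info("iteration %s", i)

@flow
def my_subflow():
    some_task()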