Sandeep Aggarwal
06/11/2020, 12:31 PM
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/graphql/execution/execute.py", line 668, in complete_value_catching_error
    return_type, field_nodes, info, path, result
  File "/usr/local/lib/python3.7/site-packages/graphql/execution/execute.py", line 733, in complete_value
    raise result
  File "/prefect-server/src/prefect_server/graphql/states.py", line 73, in set_state
    task_run_id=state_input["task_run_id"], state=state,
  File "/prefect-server/src/prefect_server/api/states.py", line 91, in set_task_run_state
    f"State update failed for task run ID {task_run_id}: provided "
graphql.error.graphql_error.GraphQLError: State update failed for task run ID 63293e14-b1d4-4d2e-ae21-e9aeb8edfade: provided a running state but associated flow run 73a41de3-adc3-4a48-9b57-9b7bdb6094f7 is not in a running state.
So my workflow involves running some commands inside Docker containers. The workflows themselves aren't huge, but the Docker execution can take several seconds (it should stay under 1 min though). I am currently running a couple of Dask workers with limited memory, i.e. 500MB.
The workflow works fine for a small number of requests, but as I start sending multiple requests, workers start dying and I see this error in the Prefect server logs.
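[Editor's note: for reference, a per-worker memory cap like the one described above is normally set when the workers are launched. A minimal sketch assuming the standard dask-scheduler/dask-worker CLI; the scheduler address shown is the default, not taken from this thread:]

```shell
# Start a Dask scheduler (in one terminal)
dask-scheduler

# Start two workers, each capped at 500MB of memory
# (tcp://127.0.0.1:8786 is the scheduler's default listen address)
dask-worker tcp://127.0.0.1:8786 --nthreads 1 --memory-limit 500MB
dask-worker tcp://127.0.0.1:8786 --nthreads 1 --memory-limit 500MB
```

Note that `--memory-limit` bounds memory tracked per worker process; memory used by Docker containers the tasks spawn counts toward the host, not toward this limit.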
Although this is just a testing system and the actual prod environment will have higher memory limits, I'd still like to know whether this error is expected and whether there is any way to avoid or handle it.
Laura Lorenz (she/her)
06/11/2020, 1:33 PM
Sandeep Aggarwal
06/11/2020, 4:30 PM
distributed.worker - WARNING - Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 271.47 MB -- Worker memory limit: 314.57 MB
The nanny process restarts the worker, and the tasks that were stuck fail with the above error.
Apart from that, I don't see any specific error in any of the logs.
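[Editor's note: the warning and restart described above come from Dask's worker memory management, which acts at configurable fractions of the memory limit. A sketch of the relevant keys in a `distributed.yaml` config file; these are the standard `dask.distributed` settings with their default values, shown for illustration rather than as a recommendation:]

```yaml
distributed:
  worker:
    memory:
      target: 0.60     # start spilling managed data to disk at 60% of the limit
      spill: 0.70      # spill more aggressively at 70%
      pause: 0.80      # pause accepting new tasks at 80%
      terminate: 0.95  # the nanny kills and restarts the worker at 95%
```

The "no data to store to disk" warning above indicates the memory is held outside Dask's spillable task data (e.g. by leaked or subprocess memory), so raising `terminate` only delays the restart rather than preventing it.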
I will try to reproduce the error and see if I can find anything useful.
Laura Lorenz (she/her)
06/11/2020, 5:00 PM
Sandeep Aggarwal
06/13/2020, 7:54 AM