
Konstantin

04/06/2022, 2:39 PM
Hi, team Prefect!
Failed to set task state with error: ClientError([{'message': 'State update failed for task run ID 7f12b68e-3a3b-45f6-9231-283d30105595: provided a running state but associated flow run 2582c918-369e-4422-ade8-811843f495cf is not in a running state.', 'locations': [{'line': 2, 'column': 5}], 'path': ['set_task_run_states'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR', 'exception': {'message': 'State update failed for task run ID 7f12b68e-3a3b-45f6-9231-283d30105595: provided a running state but associated flow run 2582c918-369e-4422-ade8-811843f495cf is not in a running state.'}}}])
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/prefect/engine/cloud/task_runner.py", line 91, in call_runner_target_handlers
    state = self.client.set_task_run_state(
  File "/usr/local/lib/python3.9/site-packages/prefect/client/client.py", line 1839, in set_task_run_state
    result = self.graphql(
  File "/usr/local/lib/python3.9/site-packages/prefect/client/client.py", line 563, in graphql
    raise ClientError(result["errors"])
prefect.exceptions.ClientError: [{'message': 'State update failed for task run ID 7f12b68e-3a3b-45f6-9231-283d30105595: provided a running state but associated flow run 2582c918-369e-4422-ade8-811843f495cf is not in a running state.', 'locations': [{'line': 2, 'column': 5}], 'path': ['set_task_run_states'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR', 'exception': {'message': 'State update failed for task run ID 7f12b68e-3a3b-45f6-9231-283d30105595: provided a running state but associated flow run 2582c918-369e-4422-ade8-811843f495cf is not in a running state.'}}}]
There is a problem with launching tasks in Prefect. In the screenshots, the last two runs failed with the error above; the most recent run, started manually, succeeded.
👋 2

Kevin Kho

04/06/2022, 2:45 PM
Was this flow re-registered during a run? Or did maybe two agents pick up this flow?

Konstantin

04/06/2022, 2:49 PM
No, but I noticed that during registration the console reported that version 3 was registered, while the UI shows version 4 as the latest. Could this be the reason?

Kevin Kho

04/06/2022, 2:57 PM
Not sure I follow, but yes, I think registration can cause this. I can test more. What is your RunConfig?

Konstantin

04/06/2022, 3:09 PM
I didn’t quite understand the question

Kevin Kho

04/06/2022, 3:11 PM
Is the Flow running on Kubernetes/Docker/Local?

Konstantin

04/06/2022, 3:15 PM
in Docker

Kevin Kho

04/06/2022, 3:28 PM
OK, so I tried re-registering and the already running flow just ran to completion. I think the cause of this error is that the flow was cancelled or marked as failed in the UI while it was still running. The flow then has a terminal state in the database (not Running), but a task keeps executing and later tries to update its state, which throws this error saying the state can't be updated because the flow is not running anymore.
Did you mean to add screenshots btw?
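The rule behind the error can be illustrated with a small sketch. This is not Prefect's actual server code and all names here are invented; it only mirrors the check Kevin describes: the backend refuses to move a task run into a Running state when its parent flow run is no longer Running.

```python
# Illustrative sketch (NOT Prefect's server code): the backend's state rule
# that a task run may only enter Running while its flow run is Running.

FLOW_RUNS = {
    "flow-1": "Cancelled",  # terminal state recorded in the database
}

def set_task_run_state(flow_run_id: str, new_state: str) -> str:
    """Reject a Running task state if the parent flow run is not Running."""
    flow_state = FLOW_RUNS[flow_run_id]
    if new_state == "Running" and flow_state != "Running":
        raise ValueError(
            f"provided a running state but associated flow run "
            f"{flow_run_id} is not in a running state."
        )
    return new_state

# A task still executing after its flow was cancelled/failed hits this check:
try:
    set_task_run_state("flow-1", "Running")
except ValueError as exc:
    print(exc)
```

A terminal task state (e.g. Success) for the same flow run would still be accepted, which is why already-finished tasks don't produce this error.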

Konstantin

04/06/2022, 6:09 PM

Kevin Kho

04/06/2022, 7:42 PM
The screenshots are not telling me much. Are all of the failures you are showing related to the Internal Server Error?

Konstantin

04/06/2022, 9:31 PM
No, Kevin, not all of them are related to this error; the rest failed with a duplicate-insert error in the target storage.
I have now deleted the flow and registered it again:
Create Project ads_importing if not exist:
ads_importing created
Create Project clickhouse if not exist:
clickhouse created
Register file flows/clickhouse/file.py or update flow:
Collecting flows...
Processing 'flows/clickhouse/file.py':
  Building `GitLab` storage...
  Registering 'import_data from clickhouse in DWH'... Done
  └── ID: 8359eee7-4bb6-4271-8a77-xxxxxxxxxxxx
  └── Version: 1
======================== 1 registered ========================

Kevin Kho

04/06/2022, 11:10 PM
Sorry, I’m a bit lost now 😅. Are you showing me they are stuck in scheduling?

Konstantin

04/07/2022, 5:19 AM
No :) Look at the latest version in the UI and in the console. I'm sorry for confusing you.
I completely deleted the flow, along with its history, and after that registered it as a new flow.

Kevin Kho

04/07/2022, 5:29 AM
Ah I see what you mean
Is it affecting runs though? This isn’t normal behavior, but I could see it still working

Konstantin

04/07/2022, 2:30 PM
Umm, that's something I'll have to find out. Kevin, tell me: does the Prefect server/agent in a Docker RunConfig record actions in UTC, or in GMT plus a timezone offset?

Kevin Kho

04/07/2022, 2:33 PM
Prefect backend is all in UTC, and then the UI converts to the timezone you are in. But you can also give Prefect timezone-aware timestamps
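As a stdlib-only illustration of the distinction (Prefect itself is not involved here): a timezone-aware UTC timestamp and its local rendering denote the same instant, which is what lets the UI convert freely.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Backend-style timestamp: timezone-aware, stored in UTC.
utc_ts = datetime(2022, 4, 7, 14, 33, tzinfo=timezone.utc)

# What a UI viewed from, e.g., Moscow (UTC+3) would display for that instant.
local_ts = utc_ts.astimezone(ZoneInfo("Europe/Moscow"))

print(utc_ts.isoformat())    # 2022-04-07T14:33:00+00:00
print(local_ts.isoformat())  # 2022-04-07T17:33:00+03:00
assert utc_ts == local_ts    # same instant, different representations
```

Passing timezone-aware datetimes (rather than naive ones) removes any ambiguity about which wall-clock time a schedule refers to.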

Konstantin

04/08/2022, 1:01 PM
The transaction from the first run is now rolling back in the target DWH.
pg_stat_activity:
started manually after the unsuccessful runs
second run after the unsuccessful ones, launched by the scheduler

Anna Geller

04/08/2022, 1:57 PM
My Russian is a bit rusty 😅 can you describe your problem in English?
or German or Polish 😄

Konstantin

04/08/2022, 1:58 PM
sorry
thank you

Anna Geller

04/08/2022, 2:03 PM
generally speaking, I saw a similar issue with long-running jobs. Do you happen to run some subprocess or Databricks job within your flow that fails with the original error message? Could you share your flow code?
If you need an explanation for why you see this issue and what you can do about it, check out this Discourse topic

Konstantin

04/08/2022, 2:59 PM
Anna, we are using Prefect to run Docker containers in Kubernetes.
I read the article, and a question came up: when everything runs in Docker, what should be substituted for PREFECT__CLOUD__HEARTBEAT_MODE? Or is this option universal?

Kevin Kho

04/08/2022, 3:26 PM
If you want to turn off heartbeats, I suggest setting it on the Flow's RunConfig:
flow.run_config = KubernetesRun(..., env={"PREFECT__CLOUD__HEARTBEAT_MODE": "thread"})
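Since these flows run in Docker rather than Kubernetes, the equivalent setting on a DockerRun might look like the fragment below. This is a configuration sketch assuming Prefect 1.x: `flow` is the user's existing Flow object, and the image tag shown is an assumption, not taken from the thread.

```python
from prefect.run_configs import DockerRun

# Run the heartbeat in a thread instead of a subprocess;
# PREFECT__CLOUD__HEARTBEAT_MODE is a Prefect 1.x config key.
flow.run_config = DockerRun(
    image="prefecthq/prefect:1.2.0-python3.9",  # assumed image tag
    env={"PREFECT__CLOUD__HEARTBEAT_MODE": "thread"},
)
```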
:upvote: 1

Konstantin

04/09/2022, 4:17 PM
it worked, thanks guys!
🙌 2