# ask-community
j
I upgraded my Prefect Server from 0.15.2 -> 0.15.6, but as of the upgrade I can’t run any flows due to the following error:
```
Failed to retrieve task state with error: ClientError([{'message': 'Expected type UUID!, found ""; Could not parse UUID: ', 'locations': [{'line': 2, 'column': 5}], 'path': ['get_or_create_task_run_info'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR', 'exception': {'message': 'Expected type UUID!, found ""; Could not parse UUID: '}}}])
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/prefect/engine/cloud/task_runner.py", line 154, in initialize_run
    task_run_info = self.client.get_task_run_info(
  File "/usr/local/lib/python3.8/site-packages/prefect/client/client.py", line 1798, in get_task_run_info
    result = self.graphql(mutation)  # type: Any
  File "/usr/local/lib/python3.8/site-packages/prefect/client/client.py", line 569, in graphql
    raise ClientError(result["errors"])
prefect.exceptions.ClientError: [{'message': 'Expected type UUID!, found ""; Could not parse UUID: ', 'locations': [{'line': 2, 'column': 5}], 'path': ['get_or_create_task_run_info'], 'extensions': {'code': 'INTERNAL_SERVER_ERROR', 'exception': {'message': 'Expected type UUID!, found ""; Could not parse UUID: '}}}]
```
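For context on what the server is complaining about: the GraphQL layer received an empty string where a `UUID!` argument was required. Python's own `uuid` module rejects an empty string the same way, which is a quick local way to see the class of failure:

```python
import uuid

# An empty string is not a parseable UUID -- this mirrors the server-side
# 'Could not parse UUID: ' error, where the UUID argument arrived empty.
try:
    uuid.UUID("")
except ValueError as exc:
    print(f"rejected: {exc}")
```

So the interesting question isn't the parse failure itself, but why the client sent an empty task-run ID in the first place.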
I found the following issue: https://github.com/PrefectHQ/prefect/issues/4687, but it says the root issue should be fixed. Note: this happens for all of my flows (all of which have a relatively small number of tasks).
I’m currently downgrading my flows to 0.15.5 (while keeping the server at .6) to see if that remedies it 🤞
Okay, that didn’t work. I just reverted the backend to 0.15.2, and my flows as well. Now it works. So somewhere between .2 -> .6 there is a regression breaking all flow runs. Interestingly, this wasn’t raised earlier by other users... I currently don’t have time to investigate this myself; I might at the end of the week. For reference, I’m running in Kubernetes with the provided Helm chart.
k
Hey @Joël Luijmes, will ask the team about this
Did you re-register these and are you registering immediately after running?
j
Yes, I upgraded the backend services first, and after that rebuilt and re-registered all flows with 0.15.6 installed.
k
Ah sorry, I had typos. I meant: did you trigger a flow run right after registering? We added batch registration logic, which may cause flows to take longer to fully register than before, meaning there can be a delay between the registration call and when the flow is fully registered. If you trigger a run in that window, you can get an error like this.
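If that delay were the culprit, one workaround would be polling the backend after registering and only triggering the run once the flow is visible. A minimal sketch of such a wait loop, where `is_fully_registered` is a hypothetical check (not a real Prefect API) that would query the backend, e.g. via `client.graphql`:

```python
import time

def wait_until(check, timeout=60.0, interval=2.0):
    """Poll check() until it returns True or timeout seconds pass.

    Returns True if the check succeeded within the timeout, else False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage -- is_fully_registered(flow_id) would confirm the new
# flow version is queryable before the run is created:
#
#   if wait_until(lambda: is_fully_registered(flow_id)):
#       client.create_flow_run(flow_id=flow_id)
```

As it turns out below, though, the runs here came from a cron schedule well after registration, so this particular race doesn't seem to apply.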
j
Hmm, no, not really. I upgraded everything on Friday, and all flows are on a cron schedule. All of their flow runs failed with that specific error.
k
Gotcha. I’ll still look around.