was moved into a new module (
), which is not available in 2.0b4, even though it is available in the orion branch of the repo. I also created an issue on GitHub: https://github.com/PrefectHQ/prefect/issues/5787
when retrying tasks run on Dask. Is there any special configuration I need to do for a retry with Dask?
Error during execution of task: KeyError(<Thread(Dask-Default-Threads-12-578, started daemon 140412823688960)>)
Exception raised while calling state handlers: SystemError('unknown opcode')
Traceback (most recent call last):
  File "/mnt/data/prefect/venv/lib/python3.8/site-packages/prefect/engine/cloud/flow_runner.py", line 119, in call_runner_target_handlers
    new_state = super().call_runner_target_handlers(
  File "/mnt/data/prefect/venv/lib/python3.8/site-packages/prefect/engine/flow_runner.py", line 116, in call_runner_target_handlers
    new_state = handler(self.flow, old_state, new_state) or new_state
  File "/mnt/data/prefect/venv3.10/lib/python3.10/site-packages/prefect/utilities/notifications/notifications.py", line 65, in state_handler
    def state_handler(
SystemError: unknown opcode
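Note that the traceback mixes two interpreters: the first two frames come from a `venv` built on python3.8, while the last frame comes from `venv3.10`. `SystemError: unknown opcode` is a classic symptom of bytecode from one Python version being executed by a different interpreter. A stdlib-only sketch of how to check which interpreter and bytecode format you are actually running (illustrative only):

```python
import importlib.util
import sys

# Each CPython version has its own bytecode "magic number". Code objects
# built for one version can raise opcode errors on another, e.g. when two
# virtualenvs with different Python versions get mixed on sys.path.
print(sys.version_info[:2])          # running interpreter version
print(importlib.util.MAGIC_NUMBER)   # bytecode magic of this interpreter
```

Running this inside each venv (`/mnt/data/prefect/venv` and `/mnt/data/prefect/venv3.10`) and comparing the output would confirm whether the two environments are being mixed.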
) tasks. One of these steps is followed up with a
, which has always worked without issue until today. Now it's throwing this error:
ValueError: The task result cannot be loaded if it is not finished
I'm not seeing how this could be, since I can see in the logs that the upstream task did in fact finish successfully. I tried explicitly setting the result of
as an upstream dependency of
(which I think shouldn't be needed), and I also tried setting the
, but still no luck. Does anyone have any ideas?
I have already defined the S3 bucket as the storage in prior steps and even made sure to reset it as the default beforehand. I have no problem creating the deployment locally; it is only an issue when running it on GitHub Actions.
You have not configured default storage on the server or set a storage to use for this deployment but this deployment is using a Kubernetes flow runner which requires remote storage.
import asyncio
import pendulum
from datetime import timedelta
from prefect.orion.schemas.schedules import IntervalSchedule

winter_schedule = IntervalSchedule(
    interval=timedelta(hours=24),
    anchor_date=pendulum.datetime(2022, 1, 1, 0, 30, 0, tz="Europe/Copenhagen"),
)
summer_schedule = IntervalSchedule(
    interval=timedelta(hours=24),
    anchor_date=pendulum.datetime(2022, 4, 1, 0, 30, 0, tz="Europe/Copenhagen"),
)

print(asyncio.run(winter_schedule.get_dates(1)))
print(asyncio.run(summer_schedule.get_dates(1)))
>>> "2022-05-16T01:30:00+02:00"
>>> "2022-05-16T00:30:00+02:00"
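The one-hour difference appears to come from DST: the winter anchor is pinned during CET (UTC+1), the summer anchor during CEST (UTC+2), and exact 24-hour intervals are added in absolute time, so the winter-anchored schedule drifts an hour on the wall clock once DST begins. A stdlib-only sketch reproducing the same arithmetic (no Prefect or pendulum needed; the interval counts 135 and 45 are just the number of days from each anchor to 2022-05-16):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

cph = ZoneInfo("Europe/Copenhagen")
utc = ZoneInfo("UTC")

# Anchors matching the two schedules above.
winter_anchor = datetime(2022, 1, 1, 0, 30, tzinfo=cph)  # CET,  UTC+1
summer_anchor = datetime(2022, 4, 1, 0, 30, tzinfo=cph)  # CEST, UTC+2
print(winter_anchor.utcoffset())  # 1:00:00
print(summer_anchor.utcoffset())  # 2:00:00

# Adding exact 24-hour steps in UTC reproduces the observed outputs:
winter_hit = (winter_anchor.astimezone(utc) + 135 * timedelta(hours=24)).astimezone(cph)
summer_hit = (summer_anchor.astimezone(utc) + 45 * timedelta(hours=24)).astimezone(cph)
print(winter_hit.isoformat())  # 2022-05-16T01:30:00+02:00
print(summer_hit.isoformat())  # 2022-05-16T00:30:00+02:00
```

So to keep a fixed local wall-clock time across the DST switch, a timezone-aware cron-style schedule would be the usual choice rather than a fixed interval.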