
Khyaati Jindal

02/01/2023, 4:00 AM
Hi everyone, I am getting this "Crash detected" error, but I am not able to track down the reason:
Executing 'OQCEA18L2-8a17dcaa-0' immediately...
08:33:57 AM
Crash detected! Execution was cancelled by the runtime environment.
09:00:19 AM
OQCEA18L2-8a17dcaa-0
Encountered exception during execution:
Traceback (most recent call last):
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 633, in run_until_complete
    self.run_forever()
  File "/usr/lib/python3.10/asyncio/base_events.py", line 600, in run_forever
    self._run_once()
  File "/usr/lib/python3.10/asyncio/base_events.py", line 1860, in _run_once
    event_list = self._selector.select(timeout)
  File "/usr/lib/python3.10/selectors.py", line 469, in select
    fd_event_list = self._selector.poll(timeout, max_ev)
  File "/home/ubuntu/common_env/lib/python3.10/site-packages/prefect/engine.py", line 1614, in cancel_flow_run
    raise TerminationSignal(signal=signal.SIGTERM)
prefect.exceptions.TerminationSignal

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/common_env/lib/python3.10/site-packages/prefect/engine.py", line 643, in orchestrate_flow_run
    result = await run_sync(flow_call)
  File "/home/ubuntu/common_env/lib/python3.10/site-packages/prefect/utilities/asyncutils.py", line 165, in run_sync_in_interruptible_worker_thread
    assert result is not NotSet
AssertionError
This flow has been in the 'Running' state for about an hour now.
Also, I am trying to delete the flows from the UI that have been in the Running state for more than an hour, and I get a prompt saying the flow run failed to delete.
The logs in my agent have this:
Engine execution of flow run '073fa891-d461-47d9-81fe-805dc026cc74' aborted by orchestrator: This run cannot transition to the RUNNING state from the RUNNING state.
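One possible workaround for a run stuck in Running (a minimal sketch, assuming the Prefect 2.x Python client; the force_cancel helper name is made up, and the flow run id is the one from the log above): force the state transition so the orchestration rule that rejected the RUNNING -> RUNNING transition does not block it.

import asyncio
from uuid import UUID

from prefect import get_client
from prefect.states import Cancelled

async def force_cancel(flow_run_id: str) -> None:
    # force=True asks the orchestrator to accept the transition even if
    # its rules would normally reject it (e.g. a run stuck in Running)
    async with get_client() as client:
        await client.set_flow_run_state(
            flow_run_id=UUID(flow_run_id), state=Cancelled(), force=True
        )

asyncio.run(force_cancel("073fa891-d461-47d9-81fe-805dc026cc74"))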

Aleksandr Liadov

02/10/2023, 1:32 PM
Hello @Khyaati Jindal, I have the same thing. Did you find a solution or workaround?
👀 1

Khyaati Jindal

02/10/2023, 1:33 PM
Not really, I just deleted everything and redeployed.
A new work queue too.

Yaron Levi

02/15/2023, 7:28 PM
@Aleksandr Liadov @Khyaati Jindal Happens to me as well. I’ll open a ticket about the issue.

Aleksandr Liadov

02/16/2023, 8:12 AM
@Yaron Levi, @Khyaati Jindal I have almost zero problems now... I switched to the RayTaskRunner (local, non-cluster). It seems to me the wrapper around the Ray runner is really stable!

Yaron Levi

02/16/2023, 9:29 AM
@Aleksandr Liadov Interesting. So basically you are saying that instead of DaskTaskRunner(), I should just use RayTaskRunner() and I will generally get more stable runs?
✅ 1

Aleksandr Liadov

02/16/2023, 9:38 AM
@Yaron Levi Yes, RayTaskRunner is the most stable in our case (1 main flow with 2 subflows; one subflow has a lot of CPU-bound tasks, so we run it with Ray). I can run 150 flows (450 flows if I count subflows) without any crash, running them as k8s jobs. Before, I could only run 20 flows with the Dask task runner, and only 50 with the concurrent task runner (but without parallelisation of the CPU tasks).
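A minimal sketch of the switch being described, assuming Prefect 2.x with the prefect-ray collection installed (pip install prefect-ray); the flow and task names are hypothetical:

from prefect import flow, task
from prefect_ray.task_runners import RayTaskRunner

@task
def crunch(n: int) -> int:
    # stand-in for a CPU-bound task
    return n * n

# Swapping DaskTaskRunner() for RayTaskRunner() is the only change to the flow;
# with no address given, Ray starts a local, non-cluster instance.
@flow(task_runner=RayTaskRunner())
def my_flow() -> list[int]:
    futures = [crunch.submit(i) for i in range(10)]
    return [f.result() for f in futures]

if __name__ == "__main__":
    my_flow()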

Yaron Levi

02/16/2023, 9:46 AM
Thanks, very interesting…. I’ll try it.

Aleksandr Liadov

02/16/2023, 9:52 AM
@Yaron Levi Write back afterwards if it helped you 😉
✅ 1