
Devin McCabe

11/21/2022, 8:54 PM
Does anyone know why a mapped task might indicate "Ready to proceed with mapping" but then never proceed? It works fine with LocalExecutor but not with DaskExecutor (via FargateCluster). I'm really frustrated because I know I've encountered and solved this exact issue before...
There is nothing in the Dask scheduler/worker logs until the scheduler realizes it's been idle and shuts everything down. All of my mapped task's upstream dependencies were successful according to Prefect Cloud.
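(For context, a minimal sketch of the kind of setup being described, using the Prefect 1.x API; the task bodies, flow name, and cluster sizing are placeholders, not the actual flow.)

```python
from prefect import Flow, task
from prefect.executors import DaskExecutor, LocalExecutor


@task
def get_items():
    # Placeholder for whatever produces the items being mapped over.
    return list(range(100))


@task
def process(item):
    # Placeholder for the real per-item work.
    return item * 2


with Flow("mapped-fargate-example") as flow:
    items = get_items()
    results = process.map(items)

# Runs everything in-process: the mapped task completes fine.
flow.run(executor=LocalExecutor())

# Hangs at "Ready to proceed with mapping" in the scenario described above.
flow.run(
    executor=DaskExecutor(
        cluster_class="dask_cloudprovider.aws.FargateCluster",
        cluster_kwargs={"n_workers": 4},  # placeholder sizing
    )
)
```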

Mason Menges

11/23/2022, 8:30 PM
Hey @Devin McCabe Would you be able to provide a simplified reproducible example of the flow that's running? Nothing immediately comes to mind as to what could be causing this; it's possible the workers don't have enough memory to run the tasks, but I'd normally expect there to be some logs on the Dask side to indicate that.

Bianca Hoch

11/23/2022, 8:33 PM
Hello Devin, I'm going to throw some resources your way as well.
• How does Prefect send work to a DaskExecutor and handle memory
• How can I configure my flow to run with Dask
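(A rough sketch of the second point, configuring a Cloud-registered flow to run with Dask by attaching the executor to the flow itself, again with the Prefect 1.x API; the flow name, image, and project name are placeholders.)

```python
from prefect import Flow, task
from prefect.executors import DaskExecutor


@task
def say_hello():
    print("hello")


with Flow("dask-configured-flow") as flow:
    say_hello()

# Attach the executor to the flow so Cloud-triggered runs use Dask,
# instead of passing an executor to flow.run() for local execution.
flow.executor = DaskExecutor(
    cluster_class="dask_cloudprovider.aws.FargateCluster",
    cluster_kwargs={
        "n_workers": 2,                               # placeholder sizing
        "image": "my-registry/flow-image:latest",     # placeholder image
    },
)
flow.register(project_name="my-project")  # placeholder project name
```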

Devin McCabe

11/28/2022, 2:55 PM
Thanks, it turns out that the mapped task was very briefly running out of memory and the scheduler was killing it. My experience with Dask has generally been that it doesn't always generate appropriate (or any) logs when the scheduler kills or respawns workers due to timeouts or capacity issues.
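(One way to give the workers more headroom for that kind of brief spike is to size them up via dask_cloudprovider's FargateCluster worker parameters. This is a sketch under the assumption that worker_cpu is in Fargate CPU units and worker_mem is in MB, as documented for ECSCluster/FargateCluster; the exact values are illustrative.)

```python
from prefect.executors import DaskExecutor

# Larger per-worker memory so a short-lived spike in the mapped task
# doesn't push the worker over its limit and get it killed.
executor = DaskExecutor(
    cluster_class="dask_cloudprovider.aws.FargateCluster",
    cluster_kwargs={
        "n_workers": 4,       # illustrative sizing
        "worker_cpu": 2048,   # 2 vCPU (Fargate CPU units)
        "worker_mem": 16384,  # 16 GB per worker, in MB
    },
)

# Then attach it as before, e.g. flow.executor = executor
```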