# ask-community
A question for the Prefect experts - when using a Dask task runner, I understand that something akin to a DAG is submitted to the Dask scheduler for execution up front. The Dask scheduler (in my case the distributed scheduler) then coordinates the work across the set of established Dask workers. How does this work in the DAG-less sense, though? Particularly at the moments where Prefect is waiting for an event / result before submitting more work? Importantly, what happens to dependencies across these boundaries / blocking stages?

I am asking partly to improve my own understanding, and partly to work out why the Dask scheduler is re-executing completed tasks (tasks that completed successfully, mind you) in an adaptive cluster setting. I am wondering whether the key->value mappings from previously executed delayed functions / tasks are not being transferred between workers when one gracefully shuts down. I could understand this happening if the Dask scheduler does not expect the key->value result to be required later.
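To illustrate what I suspect is happening, here is a deliberately simplified pure-Python toy model - this is not Dask's actual scheduler code, and `MiniScheduler` / `worker_retired` are invented names. The idea: a result survives only while the scheduler knows of a dependent that still needs it; if the downstream task has not been submitted yet (the DAG-less case), a retiring worker's result may be dropped and the task recomputed later.

```python
class MiniScheduler:
    """Toy model (not Dask's real implementation) of key->value retention."""

    def __init__(self):
        self.tasks = {}        # key -> (fn, dep_keys): the graph known so far
        self.results = {}      # key -> retained result
        self.compute_log = []  # records every (re-)execution

    def submit(self, key, fn, *deps):
        """Register a task and compute it (recomputing dropped deps)."""
        self.tasks[key] = (fn, deps)
        return self.get(key)

    def get(self, key):
        if key in self.results:
            return self.results[key]
        fn, deps = self.tasks[key]
        # Dependency result was released -> it must be re-executed.
        value = fn(*(self.get(d) for d in deps))
        self.compute_log.append(key)
        self.results[key] = value
        return value

    def worker_retired(self, key):
        # If the scheduler does not expect `key` to be needed again,
        # its value is not transferred off the retiring worker.
        self.results.pop(key, None)


sched = MiniScheduler()
sched.submit("inc-1", lambda: 1 + 1)               # completes successfully
sched.worker_retired("inc-1")                       # adaptive scale-down
out = sched.submit("double", lambda x: 2 * x, "inc-1")  # submitted later
print(out)                     # 4 - correct answer, but...
print(sched.compute_log)       # ['inc-1', 'inc-1', 'double'] - inc-1 re-ran
```

In the real system the question is essentially whether Prefect's incremental submission leaves the scheduler without enough information to know that a completed key will be needed after the blocking stage.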