Hi @Chris Hart, posting here just so my response is visible: Dask is absolutely still our recommended executor for parallel / distributed workloads. The re-run behavior described above only occurs if a Dask worker dies, which is typically caused by a resource constraint. Additionally, if the workflow is running on Cloud we very explicitly prevent task reruns, so it's not an issue at all (other than some noisy logs). For this reason we recommend that users understand the memory requirements of their tasks and flows. That being said, I do plan on opening an issue on the distributed repository to try to prevent this behavior, since it is annoying.
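For anyone wanting to keep worker memory under control up front, here's a rough sketch (assuming Prefect 1.x's `DaskExecutor` backed by a local Dask cluster; the flow/task names and the specific limits are just illustrative). `cluster_kwargs` is forwarded to `dask.distributed.LocalCluster`, so you can size workers to match what your tasks actually need:

```python
from prefect import Flow, task
from prefect.executors import DaskExecutor


@task
def transform(x):
    # Stand-in for a memory-hungry task.
    return x * 2


with Flow("memory-aware-flow") as flow:
    transform.map(list(range(10)))

# Cap each worker's memory so a single heavy task is less likely to kill the
# worker (and trigger the re-run behavior discussed above). These kwargs are
# passed through to dask.distributed.LocalCluster.
flow.executor = DaskExecutor(
    cluster_kwargs={
        "n_workers": 2,
        "threads_per_worker": 1,
        "memory_limit": "4GB",
    }
)

if __name__ == "__main__":
    flow.run()
```

If you're pointing at an existing cluster instead, the same idea applies: set the worker memory limits where the cluster is defined rather than on the executor.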