# prefect-community
Hi @Chris Hart, posting here just so my response is visible: Dask is absolutely still our recommended executor for parallel / distributed workloads. The re-run behavior described above only occurs if a Dask worker dies, which is typically caused by a resource constraint. Additionally, if the workflow is running on Cloud we very explicitly prevent task reruns, so it's not an issue at all (other than some noisy logs). For this reason we recommend users understand the memory requirements of their tasks and flows. That being said, I do plan on opening an issue on the distributed repository to try and prevent this behavior, since it is annoying.
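As a sketch of the memory-constraint point above (the addresses and limits here are illustrative assumptions, not from the thread), giving each Dask worker an explicit memory cap lets Dask spill or pause before the worker is killed outright, which is the failure mode that triggers the task reruns being discussed:

```shell
# Start a Dask scheduler (illustrative host/port)
dask-scheduler --host 0.0.0.0 --port 8786

# Start workers with an explicit memory cap so a memory-hungry task
# hits Dask's spill-to-disk / pause thresholds instead of the worker
# dying from an out-of-memory kill (which causes the reruns above)
dask-worker tcp://scheduler-host:8786 \
    --nthreads 2 \
    --memory-limit 4GB
```

Sizing `--memory-limit` to fit the flow's heaviest task is the practical version of "understand the memory requirements of their tasks and flows."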
awesome thanks! not sure where I got that impression then, sry
yea no worries at all!
we will be porting to Dask soon (once mapping works for the pagination case, which may not require the task looping feature after all 🀞)
very cool! Definitely let us know if you have any questions setting up / running Prefect on your clusters. Glad to hear mapping might work; either way, the looping feature will be implemented soon 😉