
Chris White

07/29/2019, 5:44 PM
Hi @Chris Hart, posting here just so my response is visible: Dask is absolutely still our recommended executor for parallel / distributed workloads. The re-run behavior described above only occurs if a Dask worker dies, which is typically caused by a resource constraint. Additionally, if the workflow is running on Cloud we very explicitly prevent task reruns, so it’s not an issue at all (other than some noisy logs). For this reason we recommend users understand the memory requirements of their tasks and flows. That being said, I do plan on opening an issue on the distributed repository to try and prevent this behavior, since it is annoying.
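For context, a minimal sketch of the setup being recommended here, i.e. running a Prefect flow on the DaskExecutor (Prefect 0.x API; the scheduler address and task body are illustrative placeholders, not from this thread):
```python
from prefect import Flow, task
from prefect.engine.executors import DaskExecutor

@task
def say_hello():
    print("hello from a Dask worker")

with Flow("dask-example") as flow:
    say_hello()

# Point the executor at an existing Dask scheduler; leaving `address` unset
# spins up a temporary local Dask cluster instead.
flow.run(executor=DaskExecutor(address="tcp://dask-scheduler:8786"))
```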

Chris Hart

07/29/2019, 5:45 PM
awesome thanks! not sure where I got that impression then, sry

Chris White

07/29/2019, 5:45 PM
yea no worries at all!

Chris Hart

07/29/2019, 5:46 PM
we will be porting to Dask soon (once mapping is working for the pagination case... which may not actually require the task looping feature after all 🀞)
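For reference, a rough sketch of what mapping over paginated results could look like (Prefect 0.x API; the task names and page source are hypothetical, not from this thread):
```python
from prefect import Flow, task

@task
def get_page_numbers():
    return [1, 2, 3]

@task
def fetch_page(page):
    # stand-in for an API call that returns one page of records
    return {"page": page, "items": []}

with Flow("paginated-fetch") as flow:
    pages = get_page_numbers()
    # one mapped task run per page; these run in parallel under the DaskExecutor
    results = fetch_page.map(pages)
```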

Chris White

07/29/2019, 5:52 PM
very cool! Definitely let us know if you have any questions setting up or running Prefect on your clusters. Glad to hear mapping might work; either way, the looping feature will be implemented soon πŸ˜‰