Chris White

    3 years ago
    Hi @Chris Hart, posting here just so my response is visible: Dask is absolutely still our recommended executor for parallel / distributed workloads. The re-run behavior described above only occurs if a Dask worker dies, which is typically caused by a resource constraint. Additionally, if the workflow is running on Cloud, we very explicitly prevent task reruns, so it’s not an issue at all (other than some noisy logs). For this reason we recommend users understand the memory requirements of their tasks and flows. That being said, I do plan on opening an issue on the distributed repository to try to prevent this behavior, since it is annoying.
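    [Editor's note: a minimal configuration sketch of what "understanding the memory requirements" can look like in practice — running a flow on a local Dask cluster with an explicit per-worker memory limit, so workers are less likely to be killed by resource pressure. The worker count and memory values are illustrative assumptions, not recommendations; the `DaskExecutor` import path also varies between Prefect versions.]

    ```python
    from prefect import Flow, task
    from prefect.executors import DaskExecutor  # prefect.engine.executors in older releases

    @task
    def say_hello():
        print("hello")

    with Flow("memory-aware-flow") as flow:
        say_hello()

    # cluster_kwargs are forwarded to distributed.LocalCluster by default.
    # The values below (2 workers, 2 GiB each) are illustrative assumptions;
    # size them to the actual memory footprint of your tasks.
    executor = DaskExecutor(
        cluster_kwargs={
            "n_workers": 2,
            "threads_per_worker": 1,
            "memory_limit": "2GiB",
        }
    )

    flow.run(executor=executor)
    ```

    If a task exceeds the worker's memory limit, Dask may kill and restart that worker, which is the rerun scenario described above.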
Chris Hart

    3 years ago
    awesome thanks! not sure where I got that impression then, sry
Chris White

    3 years ago
    yea no worries at all!
Chris Hart

    3 years ago
    we will be porting to Dask soon (once mappings are working in the pagination case… which seems to maybe not require the task loop issue after all 🀞)
Chris White

    3 years ago
    very cool! Definitely let us know if you have any questions setting it up / running Prefect on your clusters. Glad to hear mapping might work -> either way, the looping feature will be implemented soon πŸ˜‰