I have a problem with the single-node job I'm running: it's chewing up a lot of memory because it runs each mapped level to completion, breadth-first, starting with the top-most tasks. A depth-first traversal of the DAG would be much more memory efficient for us, since then only the "vertical" at a given map index needs its full set of data in memory at once. Is there any way to make Prefect execute depth-first instead?
Sylvain Hazard
10/29/2021, 3:03 PM
IIRC both the DaskExecutor and the LocalDaskExecutor perform depth-first execution if you chain mapped tasks.
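For illustration, a minimal Prefect 1.x sketch of chained mapped tasks with a Dask-based executor; the task names (extract_ids, transform, load) and the LocalDaskExecutor choice are assumptions for the example, not from this thread:

```python
from prefect import Flow, task
from prefect.executors import LocalDaskExecutor

@task
def extract_ids():
    # Hypothetical source of work items (illustrative only).
    return list(range(100))

@task
def transform(item):
    # Per-item work; placeholder computation.
    return item * 2

@task
def load(item):
    # Persist or emit a single result; placeholder.
    print(item)

with Flow("depth-first-mapping") as flow:
    ids = extract_ids()
    transformed = transform.map(ids)  # chained maps ...
    load.map(transformed)             # ... let Dask run each "vertical" depth-first

# With a Dask-based executor, load for index i can start as soon as
# transform for index i finishes, instead of waiting for the whole level.
flow.executor = LocalDaskExecutor()

if __name__ == "__main__":
    flow.run()
```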
Kevin Kho
10/29/2021, 3:03 PM
Using the DaskExecutor will prefer depth-first execution, but the LocalExecutor can't be forced into it unless you rearchitect your flow somehow, e.g. with subflows or task looping.
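As a rough sketch of the task-looping alternative mentioned above, using Prefect 1.x's LOOP signal; the per-item work and flow name are made up for the example:

```python
import prefect
from prefect import Flow, task
from prefect.engine.signals import LOOP

@task
def process_all(items):
    # Carry the current index across loop iterations via the loop payload.
    loop_payload = prefect.context.get("task_loop_result", {"index": 0})
    index = loop_payload["index"]
    if index >= len(items):
        return  # all items processed
    # Do the full "vertical" for this single item before moving on,
    # so only one item's intermediate data is held in memory at a time.
    transformed = items[index] * 2  # placeholder work
    print(transformed)
    raise LOOP(message=f"processed item {index}", result={"index": index + 1})

with Flow("loop-instead-of-map") as flow:
    process_all(list(range(100)))

if __name__ == "__main__":
    flow.run()
```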