Chris

08/08/2019, 10:09 AM
Hey, I’ve noticed that unused allocated memory isn’t freed after a task/flow run, so if I run a scheduled flow that contains a memory-intensive task, that memory stays allocated from the first run onward. I’ve found some workarounds (manually deleting variables at the end of a task’s run function), but this doesn’t always work (e.g. if the output of one task is passed as input to another). Is there a better workaround?

I also have a similar issue when running large-scale flows with the Dask executor: memory doesn’t seem to be freed between tasks. I found a relevant issue, https://github.com/dask/dask/issues/3247, which suggests that the pool a Dask worker uses to complete a task isn’t closed after each task. This causes problems at scale: even if I split the data into small chunks and pass them as mapped args to a task, the leaked memory accumulates with each task run and eventually causes workers to die. Has anyone experienced anything similar?
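(For reference, a minimal sketch of the manual-cleanup workaround mentioned above, assuming a Prefect 0.x-style @task; the load/transform logic is a hypothetical stand-in. del plus gc.collect() can only free objects with no remaining references, which is exactly why it fails once a task’s output is passed downstream: Prefect itself keeps a reference to the result until the flow run finishes.)

```python
import gc

from prefect import task


@task
def crunch(path: str) -> None:
    # Hypothetical memory-intensive step: load a chunk, transform it,
    # and write the result to disk instead of returning it.
    data = open(path, "rb").read()   # stand-in for the real load
    result = data * 2                # stand-in for the real transform
    with open(path + ".out", "wb") as f:
        f.write(result)
    # Manual cleanup: drop the local references, then force a collection.
    # This only frees objects with no remaining references, so it cannot
    # help when the result is returned to a downstream task: Prefect
    # still holds that result until the flow run ends.
    del data, result
    gc.collect()
```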

Chris White

08/08/2019, 2:47 PM
How many tasks does your Flow have, and how much data are they returning? I experienced something similar once, but realized my tasks were each returning ~4MB of data and I had ~12,000 of them (roughly 48GB in aggregate), so keeping it all in memory wasn’t feasible. I ended up finding a workaround and my Flow ran just fine.

Chris

08/09/2019, 9:27 AM
Thanks for the reply. The tasks don’t return any data (rather, each task writes a chunk of data to disk), and there are ~1,000 of them (running on a cluster of 60 workers).
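(For context, a sketch of the flow shape being described, assuming Prefect 0.x-era mapping and DaskExecutor APIs; the chunk count, file paths, and scheduler address are illustrative. Note that nothing is returned from the mapped tasks, so any accumulation would be inside the worker processes themselves.)

```python
from prefect import Flow, task
from prefect.engine.executors import DaskExecutor


@task
def write_chunk(i: int) -> None:
    # Each mapped task writes its chunk to disk and returns nothing,
    # so no task output should stay alive between runs.
    with open(f"/tmp/chunk-{i}.out", "w") as f:
        f.write("x" * 1_000_000)  # stand-in for the real chunk


with Flow("chunked-write") as flow:
    write_chunk.map(list(range(1000)))

# Assumed scheduler address; the cluster above has ~60 workers.
flow.run(executor=DaskExecutor(address="tcp://scheduler:8786"))
```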

Mike

08/15/2019, 7:23 AM
Has there been any update on this? I’m having the same problem.

Chris White

08/16/2019, 12:03 AM
I haven’t had time to dig into this on our side, but for what it’s worth I doubt this is a Prefect-specific issue. Is there any chance this is a memory leak in a dependency you’re using (dask.array / numpy), and are you on the most up-to-date version of dask? Anecdotally, I’ve run some very large flows on dask quite recently with no issues.
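(One way to check both suggestions is to confirm library versions and watch per-worker resident memory directly. A sketch, assuming psutil is installed on the workers and that the scheduler address below is reachable.)

```python
import dask
import distributed
import numpy

# Confirm you are on current releases of the likely suspects.
print(dask.__version__, distributed.__version__, numpy.__version__)

from distributed import Client

client = Client("tcp://scheduler:8786")  # assumed scheduler address


def rss_mb() -> float:
    # Resident memory of the calling worker process, in MB.
    import os
    import psutil
    return psutil.Process(os.getpid()).memory_info().rss / 1e6


# client.run executes the function on every worker and returns a dict
# keyed by worker address; compare the numbers before and after a flow
# run to see whether worker memory is actually accumulating.
print(client.run(rss_mb))
```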