Hello. Is there an argument I can set to restart a Dask worker every few hours, and to scatter the worker's in-memory data to the rest of the cluster before the restart? I've found that my Dask workers OOM after long execution times. I can't find the cause of the memory leak, so I have to restart the workers periodically, but when I restart a worker it loses its in-memory data and those results have to be recomputed, which wastes a lot of time and causes many mistakes.
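
For reference, the sketch below is roughly the kind of setup I'm hoping for. The worker counts, durations, and the way the lifetime keywords are passed through `LocalCluster` are my guesses from reading the distributed docs and config schema, not something I've confirmed works:

```python
# Rough sketch of what I want: workers that recycle themselves every hour.
# The sizes, durations, and keyword names below are my best guesses,
# not a verified configuration.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(
    n_workers=4,
    threads_per_worker=2,
    lifetime="1h",          # close each worker after roughly one hour
    lifetime_stagger="5m",  # jitter so all workers don't restart at the same moment
    lifetime_restart=True,  # let the nanny start a fresh worker afterwards
)
client = Client(cluster)
```

For workers started from the command line I believe the equivalent flags are `--lifetime`, `--lifetime-stagger`, and `--lifetime-restart`. My hope is that a lifetime-triggered shutdown retires the worker gracefully, so its in-memory results are moved to other workers before it closes instead of being recomputed, but I haven't been able to confirm that this is what actually happens.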