We are experiencing extremely slow task submission via the DaskExecutor for very large mapped tasks. In previous flow tests where a task was mapped over roughly 20K items, task submission was fast enough that our Dask cluster scaled up to the worker limit. But with a task mapped over 400K items, DaskExecutor task submission to the scheduler appears to be rate limited, and there are never enough tasks on the scheduler for it to create more workers and scale, so we are stuck with the cluster crawling along at the minimum number of workers.
Sean Harkins
09/17/2021, 4:28 PM
Here is an example of a large mapped task
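Roughly this shape (a minimal sketch assuming Prefect 1.x with an adaptive Dask cluster; the item source, cluster class, and adapt settings are placeholders, not our actual configuration):
```python
from prefect import Flow, task
from prefect.executors import DaskExecutor

@task
def generate_items():
    # Placeholder item source; the real flow maps over ~400K items.
    return list(range(400_000))

@task
def cache_inputs(item):
    # Stand-in for the real per-item work behind the cache_inputs task.
    return item

with Flow("large-mapped-flow") as flow:
    items = generate_items()
    cached = cache_inputs.map(items)

# Adaptive Dask cluster; cluster_class and adapt_kwargs here are illustrative only.
flow.executor = DaskExecutor(
    cluster_class="dask_cloudprovider.aws.FargateCluster",
    adapt_kwargs={"minimum": 2, "maximum": 100},
)
```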
Sean Harkins
09/17/2021, 4:31 PM
And note the relatively small number of tasks which the scheduler has received. Normally the number of cache_inputs tasks should grow very rapidly and the workers should be saturated, forcing the cluster to scale, but as you can see in the dashboard image below, task submission to the scheduler is slow for some reason.
Andrew Black
09/20/2021, 11:54 AM
Hi Sean, were you able to get an answer to resolve this?