Hmm, interesting. 5000 tasks doesn’t seem like that large a workload. A couple of questions:
• Where is this Dask cluster running, how many workers do you have, and what do the resources of that machine look like? (The snippet below this list shows one way to dump that info.)
• What kind of data are you mapping over? Large objects, simple strings, etc.?
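If it helps, here's a rough sketch of how you could pull the worker/resource details to paste back here. It assumes you're using `dask.distributed` and already have a scheduler address to connect to (the address below is a placeholder):

```python
from dask.distributed import Client

# Placeholder address: point this at your actual scheduler,
# or just reuse the Client you already created in your script.
client = Client("tcp://scheduler-address:8786")

# One-line summary: number of workers, total threads, total memory.
print(client)

# Per-worker detail: thread count and memory limit for each worker.
info = client.scheduler_info()
for addr, worker in info["workers"].items():
    print(addr, worker["nthreads"], worker["memory_limit"])
```

The output of that (plus a rough description of the objects you're mapping over) should make it easier to tell whether this is a scheduling-overhead issue or a memory/serialization one.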