# ask-community
j
Does anyone know how to optimize a mapping task with a 5000+ list of values? The task stalls and eats all the memory on the Dask worker.
j
Hmm interesting, 5000 doesn’t seem like that large of a workload. A couple of questions:
• Where is this Dask cluster running, how many workers, and what do the resources of that machine look like?
• What kind of data are you mapping over? Large objects, simple strings, etc.?
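One common cause of this symptom is submitting one task per value, so 5000+ items become 5000+ tasks whose inputs and results all sit in worker memory at once. A framework-agnostic sketch of the usual mitigation, batching the list so each task handles many items, is below; the function names, batch size, and worker count are illustrative assumptions, not details from this thread:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def chunked(seq, size):
    """Yield successive fixed-size batches from seq."""
    it = iter(seq)
    while batch := list(islice(it, size)):
        yield batch

def process(v):
    # Placeholder for the real per-item work being mapped.
    return v * 2

values = list(range(5000))

# One task per batch instead of one task per value: 5000 items become
# 50 tasks of 100 items each, bounding scheduler and memory overhead.
with ThreadPoolExecutor(max_workers=4) as pool:
    batched = pool.map(lambda b: [process(v) for v in b],
                       chunked(values, 100))
    results = [r for batch in batched for r in batch]
```

The same idea applies directly in Dask (e.g. partitioning the sequence so each partition is a single task) — the key knob is items per task, not total items.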