Hey everyone - I’m trying to figure out a good AWS architecture for running a large number of flows in a burst (e.g., X00,000 flows run once a week, broken into smaller batches). Ideally the backing infrastructure these flows run on would be ephemeral, so it seems like any of the following could work:
• Spinning up more agents temporarily (current plan)
• Kubernetes jobs
• ECS (?)
• Dask + Fargate (?)
◦ I know Dask parallelism operates at the task level rather than the flow level (rough sketch of what I’m picturing just below this list)
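For concreteness, here’s a minimal, untested sketch of what I have in mind for the Dask + Fargate option, using dask-cloudprovider’s FargateCluster. Everything here is a placeholder (run_batch, batches, TOTAL, the image, and the worker count), not anything from a real setup:
```
# Rough sketch (not tested) of an ephemeral Dask cluster on Fargate via
# dask-cloudprovider. Assumes AWS credentials/VPC defaults are in place.
from dask.distributed import Client
from dask_cloudprovider.aws import FargateCluster

def run_batch(batch):
    # Hypothetical: execute one batch of flows.
    for flow_input in batch:
        ...

# Hypothetical batching; TOTAL stands in for the X00,000 weekly runs.
TOTAL = 100_000
batches = [list(range(i, i + 1_000)) for i in range(0, TOTAL, 1_000)]

# The Fargate tasks backing the cluster only exist inside this block,
# so the infrastructure is ephemeral by construction.
with FargateCluster(n_workers=20, image="daskdev/dask:latest") as cluster:
    with Client(cluster) as client:
        futures = client.map(run_batch, batches)  # one Dask task per batch
        client.gather(futures)
# Scheduler and workers are torn down when the block exits.
```
Mapping each batch as a single Dask task is how I’d work around the task-level vs. flow-level granularity issue, but I’m not sure it’s the right call.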
I’m wondering if anyone here has similar use cases. If so, what works well for you?