I normally use something like this, but I am trying to wrap my head around the concept of Prefect mapped tasks, which as I understand it can be executed in parallel in separate processes. In that case the rate limit would not be applied across all tasks, but only within each process.
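For context, the "something like this" I normally use is roughly the following token-bucket sketch, which throttles calls within a single process (the class and names here are illustrative, not from any particular library):

```python
import threading
import time

class RateLimiter:
    """Allow at most `rate` calls per `per` seconds within one process."""

    def __init__(self, rate: int, per: float = 1.0):
        self.rate = rate
        self.per = per
        self.tokens = float(rate)          # bucket starts full
        self.updated = time.monotonic()
        self.lock = threading.Lock()       # thread-safe, but process-local

    def acquire(self) -> None:
        """Block until a call is allowed, then consume one token."""
        with self.lock:
            now = time.monotonic()
            # Refill tokens based on elapsed time, capped at the bucket size.
            self.tokens = min(
                self.rate,
                self.tokens + (now - self.updated) * self.rate / self.per,
            )
            self.updated = now
            if self.tokens < 1:
                # Sleep just long enough for one token to accrue.
                wait = (1 - self.tokens) * self.per / self.rate
                time.sleep(wait)
                self.tokens = 0.0
                self.updated = time.monotonic()
            else:
                self.tokens -= 1
```

The lock makes this safe across threads, but the state lives in one process's memory, which is exactly why it breaks down when mapped tasks run in separate processes.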
I have an Airflow DAG that has to call an API 20k times. The problem I have now is that a single task makes all 20k API calls, so if that task fails, I have to repeat all of the calls.
I was thinking that in Prefect this could be a mapped task. In that case the failure of a single API call would result in just that one task failing and retrying, but I am wondering about the rate limit issue.
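To make the per-call failure isolation concrete, this is the retry behavior I'd want each mapped task instance to have, sketched in plain Python rather than actual Prefect code (`call_one`, `TransientError`, and the constants are made-up names):

```python
import time

MAX_RETRIES = 3
RETRY_DELAY = 0.01  # seconds; would be much larger against a real API

class TransientError(Exception):
    """Stand-in for a retryable API failure (e.g. HTTP 429/503)."""

def call_one(item, api_call, max_retries=MAX_RETRIES):
    """Run a single API call with retries.

    On failure only this one item is retried, instead of re-running
    all 20k calls the way the current monolithic Airflow task does.
    """
    for attempt in range(max_retries + 1):
        try:
            return api_call(item)
        except TransientError:
            if attempt == max_retries:
                raise  # exhausted retries: let this one task fail
            time.sleep(RETRY_DELAY * (2 ** attempt))  # exponential backoff
```

In Prefect this per-call function would become the mapped task body, with the retry policy declared on the task decorator instead of hand-rolled.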
My code has retry logic that handles transient errors, so this may not be the best example of a potential task failure, but in general I am wondering how you'd go about limiting concurrency and rate in a distributed environment without adding something like redis to the mix.
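One redis-free workaround I've been considering is to statically split the global budget: if the API allows R calls per second and I know I'm running N worker processes, give each worker a local limiter set to R/N. A trivial sketch of the arithmetic (the function name and numbers are hypothetical):

```python
def per_worker_budget(global_rate_per_sec: float, num_workers: int) -> tuple[float, float]:
    """Evenly split a global rate limit across workers with no shared state.

    Returns (worker_rate, min_interval_between_calls). Conservative: if some
    workers sit idle their share goes unused, but the global limit is never
    exceeded and no coordination service (redis, etc.) is required.
    """
    worker_rate = global_rate_per_sec / num_workers
    return worker_rate, 1.0 / worker_rate
```

The obvious downside is wasted headroom when work is unevenly distributed, which is why I'm asking whether there's a better pattern.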