# ask-community
j
Apologies if something like this has been asked before. I'm developing a flow in 1.x with Cloud that will need to adhere to API rate limiting (e.g. 1 request per second). Is there a first-class, executor-agnostic way of doing this? I will be making the HTTP requests in mapped tasks. The best I've come up with is to use https://pyratelimiter.readthedocs.io/en/latest/ to create a queue that's persisted to some sort of storage backend (like SQLite if only using a local Dask executor). Each mapped task that needs to make an API request would wait to acquire a slot in the queue before proceeding.
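Roughly what I have in mind (a minimal sketch using the pyrate-limiter 2.x API inside a Prefect 1.x mapped task; the endpoint is made up, and the default in-memory bucket only coordinates threads in a single process, so a multi-process or distributed executor would need one of the shared backends like the SQLite or Redis buckets):

```python
import requests
from prefect import task, Flow
from pyrate_limiter import Duration, Limiter, RequestRate

# One request per second, shared by every mapped task run in this process.
# NOTE: the default in-memory bucket does not coordinate across processes
# or machines; that's where a persistent backend (SQLite, Redis) comes in.
limiter = Limiter(RequestRate(1, Duration.SECOND))


@task
def fetch(item_id: str) -> dict:
    # delay=True blocks until a slot is free instead of raising.
    with limiter.ratelimit("my-api", delay=True):
        # hypothetical endpoint, just for illustration
        resp = requests.get(f"https://api.example.com/items/{item_id}")
        resp.raise_for_status()
        return resp.json()


with Flow("rate-limited-fetch") as flow:
    results = fetch.map(["a", "b", "c"])
```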
k
We have flow-level and task-level concurrency limiting.
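For reference, task-level limits in 1.x Cloud are driven by task tags; a minimal sketch (the tag name is just an example, and the concurrency limit itself is configured in Cloud against the tag, not in code):

```python
from prefect import task, Flow

# Tagging the task lets Prefect Cloud enforce a task run concurrency limit
# on the "api-calls" tag; the limit value is set in Cloud for that tag.
@task(tags=["api-calls"])
def fetch(item_id: str) -> str:
    return f"fetched {item_id}"


with Flow("tag-limited") as flow:
    fetch.map(["a", "b", "c"])
```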
j
This is different from concurrency; I need to rate limit, not just prevent more than N tasks from running at a given time.
For example, even if I limit concurrency to a single task at a time, I could still blow through the API rate limits imposed on me.
k
Ah, I see what you're saying. In that case there's no other built-in way beyond task run concurrency (and, to some extent, retries with a retry delay).
j
Right, I could just retry on 429, but that still seems somewhat naive.
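Something like this is what I mean by naive (a sketch against Prefect 1.x; the endpoint is made up): just raise on 429 and lean on max_retries / retry_delay.

```python
from datetime import timedelta

import requests
from prefect import task


# The "naive" fallback: let Prefect 1.x retries absorb 429 responses.
@task(max_retries=5, retry_delay=timedelta(seconds=30))
def fetch(item_id: str) -> dict:
    # hypothetical endpoint, just for illustration
    resp = requests.get(f"https://api.example.com/items/{item_id}")
    # raise_for_status() raises on 429 (and other error codes), which fails
    # the task run and lets Prefect schedule a retry after retry_delay.
    resp.raise_for_status()
    return resp.json()
```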