Ah, that's good to hear, and thanks for the quick reply! 🙂 There are a couple of use cases I have in mind, but right now I'm focusing on one where we may need to retry a task after an extended waiting period (e.g. 15 hours).

For example: a service I'm using might impose a quota of 10,000 requests per unique identifier per day. Each of my unique identifiers corresponds to a single device in my fleet of ocean wave height sensors, and I'm loading several years' worth of historical data (structured as hourly dump files from another system) into the service as quickly as possible. Our wave sensors only collect data intensively when the waves reach a certain height within a given time period, so in any given week a sensor may produce very little data or a whole lot of it.

We read from the dump files and send batches of lines to the service, and every so often we receive a 429/"ID quota exceeded for identifier xyz" error. When that happens, we should pause processing of that file for several hours to avoid spamming their system with requests it won't accept. Other files can continue to be processed in parallel, but if they happen to contain records from the same identifier, they'll get throttled as well (and should also be paused).
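To make the shape of it concrete, here's a minimal sketch of the per-identifier pausing I have in mind (all names here — `IdentifierThrottle`, `PAUSE_SECONDS` — are made up for illustration; this isn't tied to any particular framework):

```python
import time

# Assumed back-off window after a quota rejection (e.g. ~15h in our case).
PAUSE_SECONDS = 15 * 60 * 60


class IdentifierThrottle:
    """Tracks, per identifier, the earliest time we may send again.

    Shared between all file workers, so a 429 seen while processing one
    dump file also pauses batches for the same identifier in other files.
    """

    def __init__(self, pause_seconds=PAUSE_SECONDS, clock=time.monotonic):
        self._resume_at = {}            # identifier -> monotonic timestamp
        self._pause_seconds = pause_seconds
        self._clock = clock             # injectable for testing

    def is_paused(self, identifier):
        """True if batches for this identifier should be held back."""
        resume_at = self._resume_at.get(identifier)
        return resume_at is not None and self._clock() < resume_at

    def pause(self, identifier):
        """Record a quota rejection: hold this identifier for the window."""
        self._resume_at[identifier] = self._clock() + self._pause_seconds
```

A worker would then check `is_paused(identifier)` before sending each batch, requeue (rather than drop) batches for paused identifiers, and call `pause(identifier)` whenever the service answers 429 — while unaffected identifiers keep flowing in parallel.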