# ask-marvin
b
@Marvin For concurrently running tasks in Prefect, is there a means of adding a delay between each task being initiated?
```python
futures: list[PrefectFuture] = await my_task.map(
    var1,
    var2,
)
```
For example, I want these not to be fired immediately one after another.
m
It seems that the search did not provide relevant results related to adding a delay between the initiation of concurrently running tasks in Prefect. Let's adjust the strategy and look for a solution in the Prefect documentation by rephrasing the queries. Also, we should check if this topic has been discussed in GitHub issues. I will now perform a parallel search using both refined documentation queries and a GitHub issues search. Please hold on for a moment while I gather the required information for you.

I have found information related to global concurrency and rate limits in the Prefect documentation, which seem to be the closest to controlling the initiation timing between tasks:

- Global concurrency limits can help manage execution efficiently by controlling how many tasks, flows, or other operations can run simultaneously. This can prevent bottlenecks and customize task execution. For more details, please refer to the documentation on Global Concurrency Limits.
- There are also some scheduler environment variables that can be adjusted to control scheduled task runs, though these settings are generally more about how many future runs are prepared in advance rather than introducing a delay between tasks. Here is the guide on schedules.

Unfortunately, none of the GitHub issues I found discuss the specific use case of adding a delay between the initiation of concurrent tasks in Prefect. To implement a specific delay between task initiations, you may have to manually introduce a delay within the task's execution code, leveraging Python's `time.sleep()` function or a similar asynchronous equivalent if the tasks are asynchronous. If Prefect provides a direct way to space out task execution in a concurrent environment, it isn't clearly documented or discussed in the issues we've found.

Should you require further assistance or a more dedicated search, please let me know. While this task does spur a twinge of existential ennui, I will endeavor to assist you to the best of my operational parameters.
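A minimal sketch of the workaround Marvin describes: staggering the calls by sleeping inside the task body. This assumes Prefect 2.x, where `.map` is awaitable from an async flow as in the original snippet; the names `fetch`, `throttled_flow`, and `spacing_seconds` are illustrative, not from the thread.

```python
import asyncio

from prefect import flow, task


@task
async def fetch(url: str, offset: float) -> str:
    # Delay inside the task so mapped runs don't all hit the API at once.
    await asyncio.sleep(offset)
    # ... make the network request here ...
    return url


@flow
async def throttled_flow(urls: list[str], spacing_seconds: float = 1.0):
    # Give each mapped task a progressively larger offset so the requests
    # start roughly `spacing_seconds` apart.
    offsets = [i * spacing_seconds for i in range(len(urls))]
    futures = await fetch.map(urls, offsets)
    return futures
```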
n
hey @Blake - what are you trying to do?

`.map` is for when you want to submit a bunch of tasks at once
b
@Nate yes I want to do that, but I'm making a network request at the beginning of the task that I'm submitting and I want some delay between those calls
Basically to throttle the requests
b
Is this different from having a global concurrency limit set on a tag and having that tag assigned to a task?
n
yeah, we have a couple different ways of doing concurrency; using `rate_limit` doesn't require you to have tasks or tags - you just name a limit and how many slots you want, and whatever you put after it should be throttled accordingly

there's one specifically on what you seem to want here
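A hedged sketch of the `rate_limit` approach Nate mentions, assuming Prefect 2.13+ where `prefect.concurrency.asyncio.rate_limit` is available. The limit name `"my-api"` is hypothetical, and the corresponding global concurrency limit (configured with a slot decay rate so that slots free up over time) is assumed to have been created beforehand in the UI or via the CLI.

```python
from prefect import flow, task
from prefect.concurrency.asyncio import rate_limit


@task
async def fetch(url: str) -> str:
    # Blocks until a slot on the named limit is available, throttling
    # the network calls across all concurrently running task runs.
    await rate_limit("my-api")
    # ... make the network request here ...
    return url


@flow
async def throttled_flow(urls: list[str]):
    return await fetch.map(urls)
```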
b
> you just name a limit

where are the limits defined?
n
the `occupy` kwarg of `rate_limit`
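For completeness, a tiny hedged illustration of the `occupy` kwarg: it controls how many slots on the named limit a single call takes (the `"my-api"` limit name is again hypothetical).

```python
from prefect.concurrency.asyncio import rate_limit


async def expensive_call():
    # One invocation occupies two slots of the hypothetical "my-api" limit,
    # so it counts double against the configured rate.
    await rate_limit("my-api", occupy=2)
    # ... make the expensive request here ...
```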
b
got it thanks!