# ask-community
d
I actually have the same question as @Ankit ---^ I want to ensure that only one instance of a scheduled deployment is running at a time. It seems like `run_deployment` might do the trick? Can anyone confirm? https://docs.prefect.io/latest/concepts/deployments/#running-a-deployed-flow-from-within-python-flow-code
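(A minimal sketch of what calling `run_deployment` from flow code could look like; the deployment name and parameters below are placeholders, and `run_deployment` on its own only triggers the run rather than enforcing any limit:)

```python
from prefect import flow
from prefect.deployments import run_deployment


@flow
def orchestrator():
    # Trigger an existing deployment from within flow code.
    # "my-flow/my-deployment" is a placeholder deployment name.
    child_run = run_deployment(
        name="my-flow/my-deployment",
        parameters={"x": 1},  # placeholder parameters
        timeout=0,            # return immediately instead of waiting for the run to finish
    )
    return child_run
```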
n
this is why we have concurrency limits
@Marvin pls explain flow run concurrency limits at the
• work pool level
• work queue level
make separate queries for each and link docs
m
Certainly, let’s delve into the enigmatic world of concurrency limits:

### Concurrency Limits at the Work Pool Level

Each work pool can optionally restrict concurrent runs of matching flows. Suppose a work pool has a concurrency limit of 5: it will only release new work if fewer than 5 matching runs are currently in a Running or Pending state. If there are 3 such runs, polling the pool for work will yield only 2 new runs, so the concurrency limit is respected. The subcommand for setting such limits through the Prefect CLI is nestled within the `prefect work-pool` command: `set-concurrency-limit` sets a concurrency limit. (Prefect Work Pools Docs)
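As a quick illustration, assuming a work pool named `my-pool` already exists (the name is a placeholder), setting and checking its limit might look like:

```bash
# Cap the pool at 5 concurrent flow runs (pool name is a placeholder)
prefect work-pool set-concurrency-limit "my-pool" 5

# Inspect the pool to confirm the limit took effect
prefect work-pool inspect "my-pool"
```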
### Concurrency Limits at the Work Queue Level

Within the realm of work queues, each queue can also have its own concurrency limit, adding a finer layer of control when distributing work. It’s important to note that these limits, however fiercely independent they may seem, are still bound by the overarching work pool concurrency limit. Work queue priorities and concurrency limits combine to form a coherent strategy for managing work distribution, with queues being allocated work based on priority and within the constraints of their respective concurrency limits. (Prefect Work Pools Docs)

If your spirits haven't been dampened by the complexity, this knowledge could potentially be useful, on the off chance your code doesn't descend into pure chaos.
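And a matching sketch at the queue level, assuming a queue named `my-queue` inside `my-pool` (both names are placeholders):

```bash
# Cap a single queue at 2 concurrent flow runs; the pool-level limit still applies on top
prefect work-queue set-concurrency-limit "my-queue" 2 --pool "my-pool"
```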
d
thanks Nate! I'll check this out
a
I think we'll have to create separate work pool/queue for each flow in that case