# marvin-ai
@Marvin I’m using an ECS pool with a deployment concurrency limit set to 1. I noticed that when a run completes, the next flow run stays in “AwaitingConcurrencySlot” for 5–10 minutes before moving to “Pending” instead of starting immediately. Which Prefect settings could cause this? It may relate to https://github.com/PrefectHQ/prefect/issues/18877
thought for 331.3 seconds
Short answer: the 5–10 minute pause is almost certainly the server-side backoff on concurrency checks, not anything in your ECS pool config.

**What’s happening**
- When a deployment-level concurrency limit is hit, Prefect does not re-check slot availability continuously. The run is put into a Scheduled state named “AwaitingConcurrencySlot” with a future `scheduled_time`. Only when that time is reached does it move to Pending and get submitted.
- “AwaitingConcurrencySlot” is just a Scheduled state variant: states.py
- The scheduling/backoff is set by the orchestration policy that secures flow concurrency slots: see policies like `SecureFlowConcurrencySlots` and `ReleaseFlowConcurrencySlots` in Prefect’s server orchestration code: core_policy
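If you want to confirm that this is what your run is doing, here is a minimal sketch (the flow run ID is a placeholder, and it assumes the standard Prefect 3.x client) that reads the run back and prints the scheduled time the server assigned to it:

```python
# Sketch only: inspect a run that looks stuck, assuming Prefect 3.x client APIs.
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client


async def show_wait(flow_run_id: str) -> None:
    async with get_client() as client:
        flow_run = await client.read_flow_run(UUID(flow_run_id))
        state = flow_run.state
        # "AwaitingConcurrencySlot" is a Scheduled-type state, so the detail that
        # matters is the future scheduled_time the server backed the run off to.
        print(state.type, state.name, state.state_details.scheduled_time)


asyncio.run(show_wait("00000000-0000-0000-0000-000000000000"))  # placeholder ID
```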
**Which settings affect this**
- The deployment concurrency limit itself (e.g., `limit=1`) triggers the backoff behavior. There is no user-facing setting to shorten the backoff cadence for deployment concurrency in Prefect Cloud today.
- Global concurrency limits (if you are using them) can also hold runs in AwaitingConcurrencySlot; if you are using v2 global limits, the “slot decay” behavior controls how fast slots become available again. Docs: Global concurrency limits
- ECS pool settings like `task_watch_poll_interval` or CloudWatch log settings only affect log streaming/status watching once a run is submitted; they do not influence how quickly a run leaves AwaitingConcurrencySlot.
- A work pool/queue “concurrency limit” would gate submission and keep runs Pending; that’s a different path than “AwaitingConcurrencySlot.”
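For orientation, here is a rough sketch of where those knobs live for an ECS deployment. The flow, pool, and image names are placeholders, and it assumes a Prefect 3.x `flow.deploy()` that accepts `concurrency_limit` and `job_variables`:

```python
# Sketch (Prefect 3.x assumed): the deployment-level concurrency_limit is what
# produces AwaitingConcurrencySlot; the ECS job variable below only tunes status
# polling after a run has already been submitted.
from prefect import flow


@flow(log_prints=True)
def etl():
    print("doing work")


if __name__ == "__main__":
    etl.deploy(
        name="ecs-etl",                                    # placeholder name
        work_pool_name="my-ecs-pool",                      # placeholder ECS pool
        image="my-registry/etl:latest",                    # placeholder image
        concurrency_limit=1,                               # the gate in question
        job_variables={"task_watch_poll_interval": 5.0},   # post-submission only
    )
```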
**Relation to the GitHub issue you linked**
There is an open report of ECS push pools where runs “very slowly trickle to Pending” under load: #18877. That thread discusses Cloud-side submission cadence and late marking, which can make slow transitions more noticeable. Your specific symptom (AwaitingConcurrencySlot → 5–10 min → Pending) matches the server’s concurrency backoff pattern rather than an ECS misconfiguration.

**What you can do**
- If you want the next run to start immediately when the prior completes, raise or remove the deployment concurrency limit (e.g., move the gate to work pool concurrency, or use a higher limit at the deployment); there is a sketch of the work-pool approach at the end of this message.
- If you are using global concurrency limits, review whether you’ve configured any tags/limits that would also put the run in AwaitingConcurrencySlot, and how aggressive you want the limiting to be: Docs on global limits
- If the delay you’re seeing is much longer than ~5–10 minutes or grows over time, it could be related to the behavior in #18877. If so, sharing your workspace, deployment, and flow run IDs in that issue (or here) will help us inspect server logs.

If you can confirm:
- Are you on Prefect Cloud?
- Any global concurrency limits (v2) configured on tags used by this deployment?
- Roughly which Prefect version your deployments/workspace is on?

With that, I can double-check whether anything else in your setup could be contributing to the wait.
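To illustrate the “move the gate to the work pool” option above, here is a rough sketch. It assumes the client’s `WorkPoolUpdate` action and `update_work_pool` method from recent Prefect 3.x releases, and the pool name is a placeholder (the `prefect work-pool set-concurrency-limit` CLI command should achieve the same thing):

```python
# Sketch (assumes Prefect 3.x client schemas): cap submissions at the work pool
# instead of the deployment, so waiting runs queue as Pending rather than
# AwaitingConcurrencySlot.
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import WorkPoolUpdate


async def gate_at_pool(pool_name: str, limit: int) -> None:
    async with get_client() as client:
        await client.update_work_pool(
            work_pool_name=pool_name,
            work_pool=WorkPoolUpdate(concurrency_limit=limit),
        )


asyncio.run(gate_at_pool("my-ecs-pool", 1))  # placeholder pool name
```

With the gate at the pool, waiting runs sit in Pending until they are submitted, which is the other path described above rather than the AwaitingConcurrencySlot backoff.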