# ask-community
h
Hey everyone! 👋 Having a concurrency nightmare with Prefect 3.4.11 and need help.

Problem: 70-80 flows running when deployment limits should cap at 20 total (two deployments, one with limit 13 and one with limit 7, both running on the same work pool).

My setup:
• Prefect Server 3.4.11 in Docker on EC2
• PostgreSQL on RDS
• ECS work pool (push work pool, spawns a Fargate task per flow)
• FastAPI app triggering flows programmatically using `create_flow_run_from_deployment()`

What's happening:
• Deployment YAML has `concurrency_limit: 13` and `concurrency_limit: 7` configured
• ECS keeps spawning Fargate tasks without respecting the limits

I have tried adding a concurrency limit on the work pool instead of the deployments, but that causes runs to never even reach the Pending/Running state; they get stuck in Late. So ideally I need a solution where the deployment concurrency limits are respected.
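Roughly how things are wired up, as a sketch with placeholder names (`my-flow`, `deployment-a`, `my-ecs-pool`, and the `/trigger` route are stand-ins for my real names):

```yaml
# prefect.yaml (sketch) -- only the relevant keys, names are placeholders
deployments:
  - name: deployment-a
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-ecs-pool
    concurrency_limit: 13
  - name: deployment-b
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-ecs-pool
    concurrency_limit: 7
```

And the FastAPI side, also a sketch:

```python
# FastAPI endpoint that kicks off a flow run (sketch, placeholder names)
from fastapi import FastAPI
from prefect import get_client

app = FastAPI()

@app.post("/trigger")
async def trigger_run():
    async with get_client() as client:
        # Look up the deployment by "flow-name/deployment-name"
        deployment = await client.read_deployment_by_name("my-flow/deployment-a")
        # Create a flow run; the ECS push pool then spawns a Fargate task for it
        flow_run = await client.create_flow_run_from_deployment(deployment_id=deployment.id)
    return {"flow_run_id": str(flow_run.id)}
```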
a
Hey @Harshith Gogineni! Those concurrency limits on your deployments should be respected, so there must be something going wrong. Can you check if the concurrency limits for your deployments are getting updated while flow runs are executing? You can check by running `prefect global-concurrency-limit ls`, which should show you the occupied slots for your concurrency limits at any given point.
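If you want to keep an eye on the slots while runs come in, you can just poll that listing, e.g. (a sketch using the standard `watch` utility):

```bash
# Re-run the listing every 5 seconds while flow runs are executing
watch -n 5 prefect global-concurrency-limit ls
```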
h
Yes that seems to get updated.
Any help here? I just noticed that even for a single deployment, the number of concurrently running tasks is surpassing the concurrency limit.
a
What version of Prefect is your worker running?
h
`prefecthq/prefect:3.4.17-python3.12` is the image being used to run the worker.
a
Cool, and are you running a single instance of your server?
h
Yes only a single instance is running
a
Hmm, can you show me the output of `prefect global-concurrency-limit ls` when the limits are being exceeded?
h
The issue seems to be resolved now. Earlier I thought my server was at 3.4.11 but the worker was on 3.4.24; I've now upgraded the server to 3.4.24 as well, and the system has been stable since then.
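In case it helps anyone else, this is roughly how I'm double-checking that the server and the worker image agree on the version (a sketch; the image tag is whatever you deploy):

```bash
# Inside the server container / on the server host
prefect version

# Against the worker image
docker run --rm prefecthq/prefect:3.4.24-python3.12 prefect version
```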
a
Wonderful! I'm glad you were able to find a solution!