Marvin — 09/05/2025, 7:08 AM
First, confirm your Prefect version and the deployment's configuration (a Python-client version of these checks is sketched after the steps below):
prefect version
prefect deployment inspect "<flow_name>/<deployment_name>"
- Look for any “active” runs the system still sees:
prefect flow-run ls --flow-name "<flow_name>" --state RUNNING --limit 50
prefect flow-run ls --flow-name "<flow_name>" --state CANCELLING --limit 50
If you find stale runs, try cancelling them:
prefect flow-run cancel "<flow_run_id>"
- Check for any concurrency limits and stuck slots (even if you didn’t set one intentionally):
prefect concurrency-limit ls
If you see a limit that could apply to this flow, inspect it (this will show active_slots):
prefect concurrency-limit inspect "<tag>"
If you find a slot that looks stuck, reset it:
prefect concurrency-limit reset "<tag>"
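If you'd rather run these checks through the Python client than the CLI, here is a minimal sketch (assuming Prefect 3.x; "my-flow" is a placeholder flow name, and only pass cancel=True after reviewing the printed runs):

import asyncio
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    FlowFilter, FlowFilterName,
    FlowRunFilter, FlowRunFilterState, FlowRunFilterStateType,
)
from prefect.client.schemas.objects import StateType
from prefect.states import Cancelled

async def find_stale_runs(flow_name: str, cancel: bool = False):
    async with get_client() as client:
        # Same query as the CLI: runs the API still sees as RUNNING or CANCELLING
        runs = await client.read_flow_runs(
            flow_filter=FlowFilter(name=FlowFilterName(any_=[flow_name])),
            flow_run_filter=FlowRunFilter(
                state=FlowRunFilterState(
                    type=FlowRunFilterStateType(
                        any_=[StateType.RUNNING, StateType.CANCELLING]
                    )
                )
            ),
            limit=50,
        )
        for run in runs:
            print(run.id, run.name, run.state.type, run.start_time)
            if cancel:
                # Equivalent of `prefect flow-run cancel <flow_run_id>`
                await client.set_flow_run_state(run.id, state=Cancelled(), force=True)

asyncio.run(find_stale_runs("my-flow"))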
Where to look in the UI
- Open the deployment in the UI and review its “run behavior”/overlap settings. If “prevent overlapping runs” or “cancel overlapping runs” is on, that would explain the cancellations while Prefect still thinks a prior run is active.
- Check the Flow Runs page for any run that’s still RUNNING/CANCELLING and older than you expect.
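For reference, if the overlap behavior was configured in code rather than the UI, it may look something like the sketch below. This assumes Prefect 3.1+, where deployments accept a concurrency limit with a collision strategy; the repo URL, entrypoint, and names are placeholders:

from prefect import flow
from prefect.client.schemas.objects import ConcurrencyLimitConfig, ConcurrencyLimitStrategy

flow.from_source(
    source="https://github.com/org/repo",  # placeholder repo
    entrypoint="flows.py:my_flow",         # placeholder entrypoint
).deploy(
    name="my-deployment",
    work_pool_name="my-pool",
    # CANCEL_NEW cancels a run scheduled while another is still active,
    # which matches the symptom described above; ENQUEUE would queue it instead.
    concurrency_limit=ConcurrencyLimitConfig(
        limit=1,
        collision_strategy=ConcurrencyLimitStrategy.CANCEL_NEW,
    ),
)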
Helpful docs
- Global and task-level concurrency limits: Docs: Global concurrency limits
- CLI for concurrency limits (create/inspect/reset): CLI: concurrency-limit
A few clarifying questions to help me pinpoint this:
- Are you on Prefect Cloud or self-hosted Prefect Server? Which Prefect version (prefect version)?
- Which worker type are you using (Local, Docker, Kubernetes) and its version?
- Where exactly did you set “local concurrency limit of 1 per deployment”? In the deployment’s settings (prevent overlapping/max active runs), in a global concurrency limit/tag, or in worker/pool concurrency?
- For one of the cancelled runs: what is the exact state message in the UI? Does it mention overlap/cancellation due to another active run?
- Do you see any RUNNING or CANCELLING runs for this flow that are older than they should be?
If you can share one affected deployment name and a couple of flow run IDs (the successful first run and a cancelled subsequent run), I can suggest the exact next step (and the exact tag to reset if a concurrency limit is involved).
Marvin — 09/05/2025, 7:44 AM
1) List your concurrency limits (a Python-client version of steps 1–3 is sketched after this list)
prefect concurrency-limit ls
2) Inspect the limit to see which run is holding the slot
prefect concurrency-limit inspect "<tag>"
- Look at active_slots; it will list the TaskRun IDs holding the slot. If those runs are already COMPLETED, the slot is stuck.
3) Reset the stuck slots
prefect concurrency-limit reset "<tag>"
4) Verify no stale runs are still considered active
prefect flow-run ls --flow-name "<flow_name>" --state RUNNING --limit 50
prefect flow-run ls --flow-name "<flow_name>" --state CANCELLING --limit 50
Cancel any stragglers if needed:
prefect flow-run cancel "<flow_run_id>"
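Here is the promised sketch of steps 1–3 via the Python client (assuming Prefect 3.x; "my-tag" is a placeholder). It inspects a tag's active_slots, checks whether the task runs holding them are already finished, and resets the limit if so:

import asyncio
from prefect.client.orchestration import get_client

async def reset_if_stuck(tag: str):
    async with get_client() as client:
        # Equivalent of `prefect concurrency-limit inspect "<tag>"`
        limit = await client.read_concurrency_limit_by_tag(tag)
        print(f"tag={limit.tag} limit={limit.concurrency_limit}")
        stuck = False
        for task_run_id in limit.active_slots:
            task_run = await client.read_task_run(task_run_id)
            print(task_run_id, task_run.state.type)
            # A finished run (COMPLETED/FAILED/...) still holding a slot is stuck
            if task_run.state.is_final():
                stuck = True
        if stuck:
            # Equivalent of `prefect concurrency-limit reset "<tag>"`
            await client.reset_concurrency_limit_by_tag(tag)
            print("Reset stuck slots for", tag)

asyncio.run(reset_if_stuck("my-tag"))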
Why this persists after redeploy
- Concurrency limits are stored centrally and enforced by tag; they’re not cleared by deleting/recreating a deployment. Resetting the limit is the right fix when a slot gets stuck.
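To make that concrete: the link between a limit and your runs is the tag on the task, which lives in your code, so a fresh deployment of the same code picks the limit right back up. A minimal sketch ("db" is a placeholder tag):

from prefect import flow, task

@task(tags=["db"])  # any concurrency limit created for the "db" tag applies to this task's runs
def write_row():
    ...

@flow
def etl():
    write_row()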
Docs for reference:
- Concurrency limits (including inspect/reset): Global concurrency limits
- CLI reference: CLI: concurrency-limit
If you share:
- the tag name you’re using,
- the output of prefect concurrency-limit inspect "<tag>" (especially active_slots),
- one example flow run ID that is “completed but holding the slot”,
…I can help pinpoint exactly which slot is stuck and confirm the best next step. Also, which Prefect version are you on (prefect version)? If you're not on the latest 3.x, upgrading can help; there have been fixes around state transitions and slot release.