Ishan Anilbhai Koradiya (09/17/2025, 4:59 AM)

Marvin (09/17/2025, 5:02 AM)
1) Check that the deployment and the worker target the same Work Pool/Queue
prefect work-pool inspect "<your-pool>"
prefect deployment inspect "<flow-name>/<deployment-name>"
- When starting a worker, ensure the pool/queue are correct:
prefect worker start --pool "<your-pool>" --work-queue "<your-queue>"
Common mismatch: the deployment targets a specific queue but your worker is polling the pool without that queue (or vice versa).
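For example, a quick way to line the two up (flow/pool/queue names below are placeholders, and the field names are as they appear in recent 3.x output):
# See which pool/queue the deployment is pointed at
prefect deployment inspect "my-flow/my-deployment"
# In the output, note work_pool_name and work_queue_name
# Start a worker that polls exactly that pool and queue
prefect worker start --pool "prod-pool" --work-queue "etl"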
2) Confirm the Work Pool isn’t paused
- If paused, no runs will be picked up and they’ll go Late.
prefect work-pool inspect "<your-pool>"
prefect work-pool resume "<your-pool>"
3) Check Work Pool concurrency (it’s pool-wide)
- Your pool’s limit might be reached by other deployments/runs even if “this” deployment seems under its own target.
prefect work-pool inspect "<your-pool>"
# Look at concurrency_limit and active_runs
- If needed, raise it:
prefect work-pool set-concurrency-limit "<your-pool>" <new-limit>
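A small sketch with a hypothetical pool name and an arbitrary new limit; there is also a clear command if you want to remove the pool-wide cap entirely while testing:
prefect work-pool set-concurrency-limit "prod-pool" 20
# Remove the cap altogether
prefect work-pool clear-concurrency-limit "prod-pool"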
4) Verify global/tag concurrency limits (if you’re using them)
- Tag-based concurrency limits can hold work back; if a run can’t start at its scheduled time because the limit’s slots are full, it will show up as Late.
prefect concurrency-limit ls
prefect concurrency-limit inspect "<tag>"
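For example, with a hypothetical tag "database" that your tasks would opt into via tags=["database"]:
prefect concurrency-limit inspect "database"
# If active slots look stuck (e.g. left over from crashed runs), clear them
prefect concurrency-limit reset "database"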
5) Worker health and capacity
- If the worker is crash-looping, out of credentials, or can’t reach the API, it won’t pick up runs.
- For containerized workers (Docker/Kubernetes/ECS): cluster quotas or image-pull delays mostly affect runs after pickup (they sit in Pending), but if the worker stops submitting new jobs because the cluster is at quota, you can still see Late runs pile up.
6) Double-check the deployment target
- Ensure the deployment is attached to the intended Work Pool/Queue and that its schedule is active:
prefect deployment inspect "<flow-name>/<deployment-name>"
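If you want to check the schedules from the CLI as well (subcommand per the 3.x CLI; the deployment name is a placeholder):
# List the deployment's schedules and confirm at least one is active
prefect deployment schedule ls "my-flow/my-deployment"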
What would help me help you faster:
- Are you on Prefect Cloud or OSS?
- Your worker type (process/Docker/Kubernetes/ECS) and where it runs
- Work Pool name + output of:
prefect work-pool inspect "<pool>"
prefect deployment inspect "<flow>/<deployment>"
prefect concurrency-limit ls
- A snippet of the worker logs around the time a run went Late
Reference
- What “Late” indicates and common causes: Global concurrency limits (Prefect 3.x)
- Flow engine source (state transitions context): Flow engine code

Ishan Anilbhai Koradiya (09/17/2025, 9:03 AM)

Marvin (09/17/2025, 9:05 AM)
Likely causes:
- The worker's --limit (max concurrent flow runs per worker) is too low
- A global concurrency limit (flow/task-level) is throttling runs
- The worker isn’t polling the right queue(s) or the runs aren’t yet due
- Infrastructure startup is slow so it “looks idle” between submissions
Quick checks
1) Pool/queue concurrency
- In the UI: Work Pools → your pool → check the pool’s Concurrency Limit; then Queues → check each queue’s Concurrency Limit.
- Or CLI:
prefect work-pool inspect <POOL_NAME>
prefect work-queue ls
prefect work-queue inspect <QUEUE_ID>
If needed:
prefect work-pool set-concurrency-limit <POOL_NAME> <N>
prefect work-queue set-concurrency-limit <QUEUE_NAME> <N>
2) Worker limit
- How did you start the worker? If you used prefect worker start, confirm whether you set --limit (the max number of concurrent flow runs this worker will start). If that’s low, the worker will stop submitting more runs even if the pool/queue could allow more.
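A minimal sketch, with placeholder pool/queue names and an arbitrary limit of 10:
# --limit caps how many flow runs this single worker will run at once
prefect worker start --pool "prod-pool" --work-queue "etl" --limit 10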
3) Global concurrency limits
- If you’re using concurrency keys for flows or tasks, those can cap throughput regardless of workers.
- Docs: Global concurrency limits
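Quick way to see them from the CLI (Prefect 3.x groups these under gcl; the limit name below is a placeholder):
prefect gcl ls
prefect gcl inspect "<limit-name>"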
4) Queue targeting and readiness
- Make sure the worker is polling the correct queue(s) with --work-queue if you’re using named queues.
- Preview the queue to see if there are actually due runs:
prefect work-queue preview -q <QUEUE_NAME>
- If runs are scheduled in the future, the worker won’t pick them up yet.
5) Logs
- Check the worker logs for messages about “reached concurrency limit” or not finding eligible runs.
Should you start another worker on the same queue?
- Yes, it’s safe and often helpful for scale and redundancy. Multiple workers can poll the same pool/queue. This helps if your current bottleneck is the worker’s --limit or CPU/network constraints on that machine.
- It won’t help if the pool/queue/global concurrency limits are already the constraint—raise those first.
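A sketch of what that looks like (placeholder names; run the second command on another machine or in a second shell):
prefect worker start --pool "prod-pool" --work-queue "etl" --name "worker-a"
prefect worker start --pool "prod-pool" --work-queue "etl" --name "worker-b"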
If you share a bit more detail, I can be more specific:
- Prefect version (assuming 3.x)
- Worker type (process, docker, kubernetes)
- The exact command you used to start the worker (including flags)
- Pool/queue names and their concurrency limits
- Whether you’re using any global concurrency limits
Related docs
- Work pools and queues: Work pools and queues
- Global concurrency: Global concurrency limits

Ishan Anilbhai Koradiya (09/17/2025, 9:26 AM)

Marvin (09/17/2025, 9:29 AM)
Since you didn’t set --limit, the worker itself isn’t imposing a per-worker cap. That usually means something else is throttling pickups.
Quick next checks:
- Pool/queue concurrency limits
- UI: Work Pools → your pool → Concurrency Limit; Queues → Concurrency Limit and Paused/Active status
- Or CLI:
prefect work-pool inspect <POOL_NAME>
prefect work-queue ls
prefect work-queue inspect <QUEUE_ID_OR_NAME>
If needed:
prefect work-pool set-concurrency-limit <POOL_NAME> <N>
prefect work-queue set-concurrency-limit <QUEUE_NAME> <N>
- Are there eligible runs right now?
prefect work-queue preview -q <QUEUE_NAME>
If runs are scheduled in the future or not eligible, the worker won’t pick them up.
- Any global concurrency limits?
prefect concurrency-limit ls
These will cap throughput regardless of workers. Docs: Global concurrency limits
- Correct queue targeting and not paused
- Ensure the worker is polling the right queue(s) and the queue/pool aren’t paused.
- Worker logs
- Look for “reached concurrency limit” or “no eligible runs found”.
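If the logs are quiet, restarting the worker with debug logging usually shows each poll cycle and why runs are being skipped (pool/queue names are placeholders):
PREFECT_LOGGING_LEVEL=DEBUG prefect worker start --pool "prod-pool" --work-queue "etl"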
Should you start another worker on the same queue?
- Yes, it’s safe and a good diagnostic. If a second worker immediately increases throughput, the bottleneck was on the first machine (CPU/IO) or an implicit process-level constraint. If nothing changes, a pool/queue/global limit is likely the constraint.
If you can share:
- The exact worker start command
- Pool/queue names and their concurrency limits
- Worker type (process, docker, kubernetes)
- A snippet of worker logs around when it looks idle
…I can pinpoint the issue faster.
Reference docs:
- Work pools/queues: Work pools and queues
- Global concurrency: Global concurrency limits