# ask-community
@Marvin I see that my flows are Late. However, according to my concurrency limit, I can still have more flows running. So what's happening?
thought for 195.6 seconds
Short answer: “Late” means the run’s scheduled start time passed before any worker actually picked it up. It usually isn’t the schedule itself; something prevented a worker from submitting the run on time. Even if your concurrency limit looks like it has room, another gate or operational issue may be blocking pickup.

Quick checklist to find the culprit (Prefect 3.x):

1) Make sure a worker is actually polling the right place
- Is at least one worker running and healthy?
- Is it polling the same Work Pool (and Work Queue, if used) as your deployment?
- Commands to verify:
```
prefect work-pool inspect "<your-pool>"
prefect deployment inspect "<flow-name>/<deployment-name>"
```
- When starting a worker, ensure the pool/queue are correct:
```
prefect worker start --pool "<your-pool>" --work-queue "<your-queue>"
```
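For example, a correctly paired deployment and worker might look like this; it’s a minimal sketch, and the flow, pool, and queue names are hypothetical:

```
# The deployment pins a pool and a specific queue inside it...
prefect deployment inspect "etl/daily"
# ...check work_pool_name and work_queue_name in the output, then point the
# worker at that same pool and queue:
prefect worker start --pool "etl-pool" --work-queue "high-priority"
```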
Common mismatch: the deployment targets a specific queue, but your worker is polling the pool without that queue (or vice versa).

2) Confirm the Work Pool isn’t paused
- If the pool is paused, no runs will be picked up and they’ll go Late.
```
prefect work-pool inspect "<your-pool>"
prefect work-pool resume "<your-pool>"
```
3) Check Work Pool concurrency (it’s pool-wide)
- Your pool’s limit might be reached by other deployments/runs even if “this” deployment seems under its own target.
```
prefect work-pool inspect "<your-pool>"
# Look at concurrency_limit and active_runs
```
- If needed, raise it:
```
prefect work-pool set-concurrency-limit "<your-pool>" <new-limit>
```
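If the cap was set by mistake, it can also be removed outright rather than raised; the pool name and limit here are placeholders:

```
# Raise the pool-wide cap...
prefect work-pool set-concurrency-limit "etl-pool" 10
# ...or clear the pool-level cap entirely
prefect work-pool clear-concurrency-limit "etl-pool"
```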
4) Verify global/tag concurrency limits (if you’re using them)
- Tag-based concurrency limits can hold work; if a run can’t start on schedule because a limit is full, it will go Late.
```
prefect concurrency-limit ls
prefect concurrency-limit inspect "<tag>"
```
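If a tag limit turns out to be the gate, it can be adjusted or reset; slots can occasionally stay occupied after crashed runs, and a reset frees them. The tag name below is hypothetical:

```
# Allow at most 2 concurrent task runs carrying the "db" tag
prefect concurrency-limit create db 2
# Show the active slots held against the tag
prefect concurrency-limit inspect db
# Clear the slots if stale runs are still holding them
prefect concurrency-limit reset db
```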
5) Worker health and capacity
- If the worker is crash-looping, missing credentials, or can’t reach the API, it won’t pick up runs.
- For containerized/Kubernetes/ECS workers: cluster quotas or image-pull delays affect runs after pickup (Pending), but if the worker stops submitting new jobs because the cluster is at quota, you can still see Late runs pile up.

6) Double-check the deployment target
- Ensure the deployment is attached to the intended Work Pool/Queue and that its schedule is active:
```
prefect deployment inspect "<flow-name>/<deployment-name>"
```
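If the schedule itself might be paused, the deployment schedule commands can confirm it; this is a rough sketch, and the deployment name and schedule ID are placeholders:

```
# List the deployment's schedules and whether each is active
prefect deployment schedule ls "etl/daily"
# Resume a paused schedule by its ID
prefect deployment schedule resume "etl/daily" <schedule-id>
```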
What would help me help you faster:
- Are you on Prefect Cloud or OSS?
- Your worker type (process/Docker/Kubernetes/ECS) and where it runs
- Work Pool name + output of:
```
prefect work-pool inspect "<pool>"
prefect deployment inspect "<flow>/<deployment>"
prefect concurrency-limit ls
```
- A snippet of the worker logs around the time a run went Late

Reference
- What “Late” indicates and common causes: Global concurrency limits (Prefect 3.x)
- Flow engine source (state transitions context): Flow engine code
I have a worker running, but it seems it's not respecting the concurrency. There is room to run more processes, but somehow it's not picking up work. @Marvin should I start a new worker on the same queue?
thought for 154.9 seconds
Short answer: You can start another worker on the same work queue, and Prefect will coordinate them. But if a concurrency limit is the bottleneck, another worker won’t increase throughput. Let’s check the limits first.

Common reasons a worker looks idle despite “room to run”:
- Work pool or work queue concurrency limits are set too low
- The worker’s own `--limit` is too low
- A global concurrency limit (flow/task-level) is throttling runs
- The worker isn’t polling the right queue(s), or the runs aren’t yet due
- Infrastructure startup is slow, so it “looks idle” between submissions

Quick checks

1) Pool/queue concurrency
- In the UI: Work Pools → your pool → check the pool’s Concurrency Limit; then Queues → check each queue’s Concurrency Limit.
- Or CLI:
```
prefect work-pool inspect <POOL_NAME>
prefect work-queue ls
prefect work-queue inspect <QUEUE_ID>
```
If needed:
```
prefect work-pool set-concurrency-limit <POOL_NAME> <N>
prefect work-queue set-concurrency-limit <QUEUE_NAME> <N>
```
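And if a queue-level cap was set unintentionally, it can be cleared as well (the queue name is a placeholder):

```
# Remove the per-queue cap entirely
prefect work-queue clear-concurrency-limit "my-queue"
```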
2) Worker limit
- How did you start the worker? If you used `prefect worker start`, confirm whether you set `--limit` (the max concurrent flow runs this worker will start). If that’s low, the worker will stop submitting more runs even if the pool/queue could allow more.

3) Global concurrency limits
- If you’re using concurrency keys for flows or tasks, those can cap throughput regardless of workers.
- Docs: Global concurrency limits

4) Queue targeting and readiness
- Make sure the worker is polling the correct queue(s) with `--work-queue` if you’re using named queues.
- Preview the queue to see if there are actually due runs:
```
prefect work-queue preview <QUEUE_NAME> --pool <POOL_NAME>
```
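The preview only looks a short window ahead by default; widening it with `--hours` can reveal runs that are scheduled but not yet due (queue and pool names are placeholders):

```
# Show runs coming due over the next 12 hours for this queue
prefect work-queue preview my-queue --pool my-pool --hours 12
```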
- If runs are scheduled in the future, the worker won’t pick them up yet.

5) Logs
- Check the worker logs for messages about “reached concurrency limit” or not finding eligible runs.

Should you start another worker on the same queue?
- Yes, it’s safe and often helpful for scale and redundancy. Multiple workers can poll the same pool/queue. This helps if your current bottleneck is the worker’s `--limit` or CPU/network constraints on that machine (see the sketch at the end of this reply).
- It won’t help if the pool/queue/global concurrency limits are already the constraint; raise those first.

If you share a bit more detail, I can be more specific:
- Prefect version (assuming 3.x)
- Worker type (process, docker, kubernetes)
- The exact command you used to start the worker (including flags)
- Pool/queue names and their concurrency limits
- Whether you’re using any global concurrency limits

Related docs
- Work pools and queues: Work pools and queues
- Global concurrency: Global concurrency limits
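As a concrete sketch of the second-worker option (worker name, pool, and queue are placeholders), Prefect coordinates pickups so each run goes to exactly one worker:

```
# A distinctly named second worker polling the same pool and queue
prefect worker start --pool "my-pool" --work-queue "my-queue" --name "worker-2"
```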
@Marvin I haven't set any `--limit` flags on my worker
thought for 168.7 seconds
Got it: if you didn’t set `--limit`, the worker itself isn’t imposing a per-worker cap. That usually means something else is throttling pickups.

Quick next checks:
- Pool/queue concurrency limits
  - UI: Work Pools → your pool → Concurrency Limit; Queues → Concurrency Limit and Paused/Active status
  - Or CLI:
```
prefect work-pool inspect <POOL_NAME>
prefect work-queue ls
prefect work-queue inspect <QUEUE_ID_OR_NAME>
```
If needed:
```
prefect work-pool set-concurrency-limit <POOL_NAME> <N>
prefect work-queue set-concurrency-limit <QUEUE_NAME> <N>
```
- Are there eligible runs right now?
```
prefect work-queue preview <QUEUE_NAME> --pool <POOL_NAME>
```
If runs are scheduled in the future or aren’t eligible, the worker won’t pick them up.
- Any global concurrency limits?
```
prefect concurrency-limit ls
```
These will cap throughput regardless of workers. Docs: Global concurrency limits
- Correct queue targeting and not paused
  - Ensure the worker is polling the right queue(s) and that the queue/pool aren’t paused.
- Worker logs
  - Look for “reached concurrency limit” or “no eligible runs found” (a quick capture recipe is sketched at the end of this message).

Should you start another worker on the same queue?
- Yes, it’s safe and a good diagnostic. If a second worker immediately increases throughput, the bottleneck was on the first machine (CPU/IO) or an implicit process-level constraint. If nothing changes, a pool/queue/global limit is likely the constraint.

If you can share:
- The exact worker start command
- Pool/queue names and their concurrency limits
- Worker type (process, docker, kubernetes)
- A snippet of worker logs around when it looks idle
…I can pinpoint the issue faster.

Reference docs:
- Work pools/queues: Work pools and queues
- Global concurrency: Global concurrency limits
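To check the logs angle quickly, one rough approach (pool name and log file are placeholders):

```
# Run the worker with its output captured to a file...
prefect worker start --pool "my-pool" 2>&1 | tee worker.log

# ...then, from another shell, search for throttling or eligibility messages
grep -iE "concurrency|eligible" worker.log
```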