Balaji Arun
10/15/2025, 5:24 AM
Marvin
10/15/2025, 5:30 AM
1) Work pool or work queue concurrency limits
- If a concurrency limit is set and already reached, new runs will wait until a slot frees up.
prefect work-pool inspect "<POOL_NAME>"
prefect work-pool set-concurrency-limit "<POOL_NAME>" 100
prefect work-queue ls
prefect work-queue inspect "<QUEUE_NAME>" --pool "<POOL_NAME>"
prefect work-queue set-concurrency-limit "<QUEUE_NAME>" 100 --pool "<POOL_NAME>"
Docs: CLI: work-pool, CLI: work-queue
2) Work pool or work queues paused
- If paused, nothing will be submitted.
prefect work-pool inspect "<POOL_NAME>"
prefect work-queue inspect "<QUEUE_NAME>" --pool "<POOL_NAME>"
3) No (or the wrong) workers polling your pool/queues
- Make sure you have a Kubernetes worker running against the correct pool/queues.
prefect worker start --pool "<POOL_NAME>" --work-queue "<QUEUE_NAME>"
- If you filter by queue name(s), the worker will only pull from those queues. If you don’t specify queues, it pulls from all queues in the pool.
4) Task-level concurrency limits (by tag)
- If you use concurrency limits on tags, tasks can bottleneck runs and make them wait in Pending/Awaiting states.
prefect concurrency-limit ls
prefect concurrency-limit inspect "<TAG>"
Docs: Concurrency limits
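Conceptually, a tag-based limit behaves like a server-side semaphore shared by every task carrying that tag: at most N tagged tasks hold a slot at once, and the rest wait. A stdlib sketch of that idea (an illustration only, not Prefect's implementation; the tag name and limit of 2 are made up):

```python
import threading
import time

# A tag limit of 2 means at most 2 tasks carrying that tag run at once;
# the others block, much like runs waiting in Pending for a slot.
tag_limit = threading.Semaphore(2)

lock = threading.Lock()
current = 0   # tasks currently holding a slot
peak = 0      # highest observed concurrency

def tagged_task() -> None:
    global current, peak
    with tag_limit:              # wait for a free slot for this tag
        with lock:
            current += 1
            peak = max(peak, current)
        time.sleep(0.1)          # simulated task work
        with lock:
            current -= 1

threads = [threading.Thread(target=tagged_task) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds the tag limit of 2
```

Five tasks were submitted, but the observed peak concurrency stays pinned at the limit, which is why a low tag limit can leave a large backlog of runs waiting.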
5) Many runs are Scheduled in the future
- If you bulk-created 50k runs via a schedule, most may not be due yet. These will show as Scheduled until their time arrives.
Docs: Run states
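The due/not-due distinction is just a timestamp comparison against now; a sketch with made-up (run id, scheduled time) pairs standing in for bulk-created runs (this is plain stdlib code, not the Prefect client API):

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical scheduled runs; in a real backlog these would come from
# the Prefect API (e.g. via `prefect flow-run ls`).
runs = [
    ("run-a", now - timedelta(minutes=5)),  # scheduled in the past: due
    ("run-b", now + timedelta(hours=2)),    # not due yet
    ("run-c", now + timedelta(days=1)),     # not due yet
]

# Only runs whose scheduled time has passed are eligible for submission.
due = [run_id for run_id, scheduled in runs if scheduled <= now]
print(due)  # -> ['run-a']
```

So a pool showing 50k runs in Scheduled is not necessarily stuck: only the past-due subset is eligible to be picked up by a worker.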
6) Kubernetes cluster constraints
- Even if Prefect is trying to submit, your cluster may be out of quota/capacity, preventing Job creation. Check worker logs and your cluster:
kubectl get jobs,pods -n <NAMESPACE>
Quick diagnostics to run now
- Inspect pool and queues for concurrency/paused state:
prefect work-pool inspect "<POOL_NAME>"
prefect work-queue ls
prefect work-queue inspect "<QUEUE_NAME>" --pool "<POOL_NAME>"
- See what runs a queue would pull (sanity check):
prefect work-queue preview "<QUEUE_NAME>" --pool "<POOL_NAME>"
- Confirm a worker is running against the right pool/queues and check its logs (the worker will log if a pool limit is reached or a queue is paused).
- Spot check a pending run:
prefect flow-run ls
prefect flow-run inspect "<FLOW_RUN_ID>"
Docs: CLI: flow-run
A note on “controls”
- In Prefect 3.x, you typically control submission rate/parallelism via:
- Work pool concurrency limit (pool-wide)
- Work queue concurrency limit (per queue)
- Tag-based concurrency limits for tasks
- Workers do accept a `--limit` flag on `prefect worker start` (the maximum number of flow runs one worker will execute simultaneously); beyond that, workers respect the pool/queue limits and your cluster capacity.
If you can share:
- Your work pool name and type
- Whether the pool or queues show a concurrency limit or paused state
- How many workers you have running and with which --work-queue filters
- Any worker log messages when it polls
Balaji Arun
10/15/2025, 5:39 AM
Marvin
10/15/2025, 5:42 AM
Work pools (pool-level controls)
- Paused/resumed
- Inspect:
prefect work-pool inspect "<POOL_NAME>"
- Resume/Pause:
prefect work-pool resume "<POOL_NAME>"
prefect work-pool pause "<POOL_NAME>"
- Pool-wide concurrency limit
- Inspect (shows current limit):
prefect work-pool inspect "<POOL_NAME>"
- Set/Clear:
prefect work-pool set-concurrency-limit "<POOL_NAME>" 200
prefect work-pool clear-concurrency-limit "<POOL_NAME>"
Docs: CLI: work-pool
- Base job template (Kubernetes job spec used for each run)
- Get default template to a file:
prefect work-pool get-default-base-job-template --type kubernetes --file k8s-template.json
- Update the pool to use your edited template:
prefect work-pool update "<POOL_NAME>" --base-job-template ./k8s-template.json
- Fields worth checking that can indirectly throttle/stop Jobs:
- namespace and serviceAccountName (permissions to create Jobs)
- image, imagePullPolicy, imagePullSecrets (pull failures block runs)
- resources.requests/limits on pod containers (too high can starve scheduling)
- nodeSelector, tolerations, affinity (overly strict = unschedulable)
- ttlSecondsAfterFinished, backoffLimit, activeDeadlineSeconds
- `volumes`/`volumeMounts`, env, labels/annotations
Kubernetes guide: Kubernetes worker guide
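One way to adjust those fields is to edit the exported JSON programmatically before running `prefect work-pool update`. A sketch with an abbreviated, made-up fragment of the template (the real file from `get-default-base-job-template` is much larger, so treat the structure below as illustrative):

```python
import json

# Abbreviated stand-in for the exported Kubernetes base job template.
template = {
    "job_configuration": {
        "namespace": "prefect",
        "job_manifest": {
            "apiVersion": "batch/v1",
            "kind": "Job",
            "spec": {
                "template": {
                    "spec": {
                        "containers": [{"name": "prefect-job", "resources": {}}],
                        "nodeSelector": {},
                    }
                }
            },
        },
    }
}

pod_spec = template["job_configuration"]["job_manifest"]["spec"]["template"]["spec"]
container = pod_spec["containers"][0]

# Modest requests keep pods schedulable on small nodes; oversized
# requests are a common reason Jobs sit unscheduled.
container["resources"] = {
    "requests": {"cpu": "500m", "memory": "512Mi"},
    "limits": {"cpu": "1", "memory": "1Gi"},
}

# An overly strict nodeSelector can make every pod unschedulable,
# so keep it broad (or empty) unless you need node pinning.
pod_spec["nodeSelector"] = {}

print(json.dumps(template, indent=2))  # save this to k8s-template.json
```

After writing the edited JSON to a file, apply it with `prefect work-pool update "<POOL_NAME>" --base-job-template ./k8s-template.json`.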
Work queues (per-queue controls inside the pool)
- Concurrency limit (per queue)
- Inspect:
prefect work-queue ls
prefect work-queue inspect "<QUEUE_NAME>" --pool "<POOL_NAME>"
- Set/Clear:
prefect work-queue set-concurrency-limit "<QUEUE_NAME>" 200 --pool "<POOL_NAME>"
prefect work-queue clear-concurrency-limit "<QUEUE_NAME>" --pool "<POOL_NAME>"
- Paused/resumed
- Pause/Resume:
prefect work-queue pause "<QUEUE_NAME>" --pool "<POOL_NAME>"
prefect work-queue resume "<QUEUE_NAME>" --pool "<POOL_NAME>"
- Priority (affects dispatch order across queues in a pool; lower value = higher priority, with 1 the highest)
- Create or update queue with priority:
prefect work-queue create "<QUEUE_NAME>" --pool "<POOL_NAME>" --priority 100
- Inspect priority:
prefect work-queue inspect "<QUEUE_NAME>" --pool "<POOL_NAME>"
- Preview which runs are eligible:
prefect work-queue preview "<QUEUE_NAME>" --pool "<POOL_NAME>"
prefect work-pool preview "<POOL_NAME>" --hours 24
Docs: CLI: work-queue, Concepts: Work pools
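To make the priority ordering concrete: queues are drained lowest-priority-value first, so a queue created with `--priority 1` is served before one with `--priority 100`. A small sketch with made-up queue names:

```python
# Hypothetical (name, priority) records; in Prefect, lower priority
# values are dispatched first (1 is the highest priority).
queues = [("backfill", 100), ("critical", 1), ("default", 50)]

dispatch_order = [name for name, priority in sorted(queues, key=lambda q: q[1])]
print(dispatch_order)  # -> ['critical', 'default', 'backfill']
```

This is why giving a bulk-backfill queue a large priority value keeps it from starving your interactive or critical queues.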
Other quick checks that affect submission rate
- Worker is polling the right pool/queues:
prefect worker start --pool "<POOL_NAME>" --work-queue "<QUEUE_NAME>"
If you don’t pass --work-queue, the worker will pull from all queues in the pool.
- Many runs showing as “Scheduled” may simply have a future scheduled time and won’t run yet.
- Tag-based task concurrency limits can hold runs in Pending/Awaiting states:
prefect concurrency-limit ls
Docs: Concurrency limits
If you share:
- prefect work-pool inspect "<POOL_NAME>" output
- prefect work-queue inspect "<QUEUE_NAME>" --pool "<POOL_NAME>" output
- Whether your worker is running and its log messages
…I can pinpoint exactly which setting is capping your throughput.