Tom Han
09/30/2025, 5:37 PM
Marvin
09/30/2025, 5:42 PM
- Per-worker limit (set when you start the worker with --limit):
prefect worker start -p <your-pool> --limit 1
or in code:
import asyncio
from prefect.workers.process import ProcessWorker
# ProcessWorker.start() is a coroutine; limit=1 caps this worker at one concurrent flow run
asyncio.run(ProcessWorker(work_pool_name="<your-pool>", limit=1).start())
- Pool-wide limit (affects all workers in the pool; no worker restart needed):
prefect work-pool set-concurrency-limit <your-pool> 1
prefect work-pool inspect <your-pool>
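If you prefer to confirm the pool-wide limit from Python rather than the CLI, here is a minimal sketch using the Prefect client (assuming Prefect 3.x; the pool name and check_pool_limit helper are placeholders):
import asyncio
from prefect.client.orchestration import get_client

async def check_pool_limit(pool_name: str) -> None:
    # read_work_pool returns the pool's configuration, including the
    # concurrency_limit applied by set-concurrency-limit above
    async with get_client() as client:
        pool = await client.read_work_pool(work_pool_name=pool_name)
        print(f"{pool.name}: concurrency_limit={pool.concurrency_limit}")

asyncio.run(check_pool_limit("<your-pool>"))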
Recommended starting points
- If the UI was becoming inaccessible, set both the worker limit and the work-pool limit to 1. Verify the Server/UI stays responsive.
- If stable, try 2. On a machine with 4+ CPU cores and plenty of RAM, 2–3 can be okay for IO-bound DB work—monitor CPU, memory, and DB connection usage and only increase if you have headroom.
- Keep the pool limit aligned with your per-worker limit (or slightly lower if you might add more workers later), so you don’t accidentally oversubscribe.
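As a quick sanity check on that last point, effective parallelism is bounded by both limits; the numbers below are placeholders, not recommendations:
num_workers = 2        # workers polling the pool
per_worker_limit = 2   # --limit passed to each worker
pool_limit = 2         # work-pool concurrency limit

# Each worker can claim up to per_worker_limit runs, but the pool caps the
# total, so the smaller of the two figures wins.
effective = min(num_workers * per_worker_limit, pool_limit)
print(f"At most {effective} flow run(s) execute at once")  # -> 2, not 4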
Optional: throttle specific operations
- If certain tasks (e.g., upserts) should never run in parallel across flows, use tag-based concurrency limits:
prefect concurrency-limit create db-writes 1
Then tag those tasks/flows with db-writes. Docs: Global concurrency limits
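For example, a task carrying the db-writes tag is throttled by that limit across all flows (upsert_records and its body are illustrative placeholders):
from prefect import flow, task

@task(tags=["db-writes"])  # task runs with this tag respect the db-writes limit
def upsert_records(batch: list[dict]) -> None:
    ...  # placeholder for the actual upsert logic

@flow
def sync_flow() -> None:
    upsert_records([{"id": 1, "value": "a"}])

if __name__ == "__main__":
    sync_flow()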
Docs and references
- Workers concept: Workers
- Work pools and pool concurrency: prefect work-pool CLI
- Process worker API (shows limit): ProcessWorker
If you share your machine specs (CPU cores/RAM) and target throughput, I can suggest a more precise limit.