# ask-marvin
f
@Marvin To run multiple flows concurrently on bare-metal infrastructure, do I need multiple workers or is 1 worker in 1 pool sufficient?
m
thought for 151.6 seconds
Short answer: one worker is enough.

Details:
- A single Prefect 3.x worker can run multiple flow runs concurrently on a bare-metal machine. Each run is executed in its own subprocess (for the Process worker).
- Control per-worker concurrency with the worker's concurrency limit. Start your worker with a higher limit to allow multiple simultaneous runs, e.g.:

```shell
prefect worker start -p my-baremetal-pool --limit 4
```

- The work pool's concurrency limit caps total concurrency across all workers in that pool. Effective concurrency = min(pool limit, sum of worker limits).
- You'd only need multiple workers if you want isolation (different environments), different queues, or to spread load across hosts. Not required for simple local parallelism.

If you're on a single box and just want multiple flows at once: 1 pool + 1 worker with `--limit > 1` will do the job, assuming your machine can keep up.
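The subprocess-per-run model described above can be sketched with just the Python standard library. This is an illustration of the concurrency pattern, not Prefect's actual worker code: each "flow run" is its own subprocess, and the limit caps how many run at once.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

LIMIT = 4  # analogous to `prefect worker start --limit 4`

def run_flow(flow_id: int) -> int:
    # Each "flow run" executes in its own subprocess,
    # mirroring how the Process worker isolates runs.
    proc = subprocess.run(
        [sys.executable, "-c", f"print('flow {flow_id} done')"],
        capture_output=True,
        text=True,
    )
    return proc.returncode

# The thread pool caps concurrent subprocesses at LIMIT,
# like the worker's concurrency limit.
with ThreadPoolExecutor(max_workers=LIMIT) as pool:
    results = list(pool.map(run_flow, range(8)))

print(results)  # eight exit codes, all 0 on success
```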
f
where is that limit setting set?
@Marvin Thank you! If limit is not specified, what is the default?
m
thought for 47.2 seconds
By default, a worker runs 1 flow run at a time. If you want concurrency, set it explicitly when starting the worker, e.g.:

```shell
prefect worker start -p my-pool --limit 4
```

Pool-level concurrency is unlimited unless you set it, so the effective cap is:
- per-worker: `--limit` (default 1)
- per-pool: pool concurrency limit (default unlimited)
- actual concurrency = min(pool limit, sum of worker limits, your machine's patience)
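That min() rule is easy to write down; a quick sketch (the function name is mine for illustration, not a Prefect API):

```python
def effective_concurrency(pool_limit, worker_limits):
    """Effective concurrency per the rule above.

    pool_limit=None means the pool is unlimited (the default).
    worker_limits is one entry per worker in the pool.
    """
    total = sum(worker_limits)
    return total if pool_limit is None else min(pool_limit, total)

# pool capped at 4, two workers started with --limit 3 each
print(effective_concurrency(4, [3, 3]))   # 4: the pool is the bottleneck
# unlimited pool, a single worker with --limit 1
print(effective_concurrency(None, [1]))   # 1
```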
n
hrm that's not right
i think by default there's no subprocess concurrency limits
let me make sure that's the right setting rq
ah i was right the first time: default behavior without an explicit limit is unlimited for the process worker. that's distinct from the runner's limit (which would apply if you were using `.serve`)
f
@Nate Thank you so much, that makes sense. Just starting to scale up a lot so wanted to confirm.
n
sure thing. if you're scaling up a lot on static infra and doing a lot of volume I'll put this (repo) pattern on your radar
• tasks are less chatty with the API than flows (client-side orchestration) -> less overhead
• when you serve tasks, you start a websocket client, so scheduled runs are distributed in real time (no polling) and they work like proper consumer groups (less risk of contention at high scale)
a single worker on a vertically scaled machine is totally valid, just letting you know about this option
notably, tasks are free to call other tasks or flows (unlike prefect 2.x), so you can `.serve` tasks that call other flows etc