Greetings, Does Prefect Worker support `--match` l...
# ask-community
s
Greetings, does the Prefect Worker support
--match
like the Prefect Agent? If not, what is the alternative? Reference: Agents with dynamic work queue matching by anticorrelator · Pull Request #7099 · PrefectHQ/prefect
n
hi @staticnotdynamic - typically, workers are "pointed at" a work pool rather than an individual queue (though you can still target specific queues if you need to). you can find the flags with
prefect worker start --help
with workers, you only really need to worry about queues if you need to prioritize work within a work pool; otherwise, just assigning a work pool with
--pool my-pool
is all you need
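for example (just a sketch, assuming a work pool named my-pool; the queue flag is from memory, so double-check against prefect worker start --help):
```bash
# start a worker that polls the "my-pool" work pool for scheduled flow runs
prefect worker start --pool "my-pool"

# optionally restrict the worker to specific queues within that pool
prefect worker start --pool "my-pool" --work-queue "my-queue"
```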
👍 1
s
Hey @Nate, thanks a bunch for the quick reply. I appreciate it. What I am looking for is literally this: Add toggle for automatic concurrency limit of one run per deployment · Issue #5623 · PrefectHQ/prefect. That issue was marked as resolved because *Agents* (but not Workers) can match flows with a specific regex/name.
So the question becomes: in Prefect v2, how can I ensure one run per deployment at a time using Prefect Workers?
Let me know if I am missing anything. We are currently migrating our workflows to Prefect v2, and this might be a potential blocker.
n
how can I ensure one run per deployment at a time
concurrency limits! if you're around @Marvin, please find docs on work-pool-level concurrency limits and tl;dr them
m
Here is a brief summary of work-pool-level concurrency limits in Prefect: each work pool in Prefect can have an optional concurrency limit to restrict the number of concurrent runs of matching flows. For example, if a work pool is set with a concurrency limit of 5, it will allow new work only if fewer than 5 matching runs are currently in a Running or Pending state. To manage concurrency limits using the Prefect CLI, you can use the
set-concurrency-limit
subcommand. This command sets a concurrency limit for the work pool, thus ensuring that your specified concurrency level is not exceeded when new work is polled. For more detailed information, you can refer to the work pools documentation.
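For example (illustrative only, assuming a work pool named my-pool):
```bash
# cap the pool at 5 concurrent flow runs
prefect work-pool set-concurrency-limit "my-pool" 5

# inspect the pool to confirm the limit took effect
prefect work-pool inspect "my-pool"
```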
s
Ok cool. I have read about task and work pool concurrency limits. What I understood is that for our N deployments, we would need N work pools to achieve this. Is that correct?
n
not necessarily - you can have N deployments get their config from 1 work pool (which just means you associate each deployment with that work pool at deployment time), and all of them would then be subject to that work pool's concurrency limit, if that's desirable
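e.g., roughly like this (a sketch; the entrypoints and pool name are placeholders, and I'm recalling the prefect deploy flags from memory, so verify with prefect deploy --help):
```bash
# both deployments are associated with the same work pool,
# so both inherit its infra config and concurrency settings
prefect deploy ./flows/etl.py:etl_flow --name etl --pool "my-pool"
prefect deploy ./flows/report.py:report_flow --name report --pool "my-pool"
```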
s
Thanks, Nate, for taking the time to answer. I will reread your words carefully and try this on our Prefect v2 playground ASAP.
n
sorry if I wasn't totally clear 🙂 in general, the way I think about workers / work pools is:
• associate new deployments with a work pool (deployments produce flow runs via schedules, triggers, and ad-hoc runs)
• work pools carry infra-level config + concurrency settings that they bestow upon the deployments associated with them
• workers (each assigned a single work pool) pick up flow runs from that pool (optionally from a specific queue within that pool) and submit them to the runtime environment
it's usually 1 work pool per unique infra environment type (like k8s, ECS, etc). if you have N deployments all in 1 infra environment, where at most one among them should be running at a given time, then you could just have 1 work pool that enforces a flow run concurrency of 1, so that if the slot is occupied, subsequent flow runs would wait in
PENDING
until that slot clears
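e.g. something like this (just a sketch, assuming a kubernetes-type pool named k8s-pool):
```bash
# one pool for the k8s environment...
prefect work-pool create "k8s-pool" --type kubernetes

# ...that only ever allows one flow run at a time across everything assigned to it
prefect work-pool set-concurrency-limit "k8s-pool" 1
```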
feel free to ask any clarifying questions when you try it out
👍 1
s
sorry if I wasn't totally clear
Oh you weren't, and the docs are super helpful as well
its usually 1 work pool per unique infra environment type (like k8s, ECS, etc)
Great. We do use Kubernetes on one work pool.
if you have N deployments all in 1 infra environment, where at most one among them should be running at a given time
That's our setup indeed. I went in and tried work pool limits, but our N deployments are mutually exclusive: the flows they produce don't overlap in function, so a single pool-wide limit isn't quite what we want (we need one run at a time per deployment, not one across all deployments).
Visual example of our setup. Each square is a deployment. We wish to limit each green deployment/flow run to 1 instance at a time
Hey @Nate, I ended up manually creating a work queue for each deployment (colored in green), each with a concurrency limit of 1.
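In case it helps anyone else, roughly what I ran (queue and pool names are placeholders, and the flags are from memory, so verify with --help):
```bash
# one queue per green deployment, each capped at a single concurrent flow run
prefect work-queue create "deployment-a" --pool "k8s-pool" --limit 1
prefect work-queue create "deployment-b" --pool "k8s-pool" --limit 1

# each deployment is then pointed at its own queue at deployment time,
# and the worker keeps polling the pool as before
prefect worker start --pool "k8s-pool"
```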
n
ah nice! so all in the same work pool, but enforcing concurrency at the work queue level? makes sense to me
s
Yeah