# ask-community
v
Is there a way to deploy a flow without attaching it to a work pool? That way I could keep adding more work pools to provide more concurrent executions, since the deployment wouldn't be tied to a particular pool. Right now I added another work pool but tasks are still stuck waiting in the first pool. Any help is appreciated!
n
> Is there a way to deploy a flow without attaching it to a work pool?
`.serve` is the only exception where a deployment doesn't need a work pool
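for reference, a minimal sketch of a served deployment (flow and deployment names here are just placeholders, assuming Prefect 2.10+ / 3.x):

```python
from prefect import flow

@flow
def my_flow():
    print("hello from a served flow")

if __name__ == "__main__":
    # .serve() registers a deployment and keeps this process alive to execute
    # its runs locally, so no work pool or worker is involved
    my_flow.serve(name="my-served-deployment")
```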
> That way I could keep adding more work pools to provide more concurrent executions, since the deployment wouldn't be tied to a particular pool.
I'm not sure I understand the need here
v
Can one deployment/flow be executed in 2 work pools based on whichever has a concurrency slot available? I tried deploying a flow without a default work pool but it didn't work. I also tried creating multiple work queues under one pool; I think I can set a concurrency limit on a work queue, so I am trying that.
So instead of one deployment that can run on multiple pools, I think it will run against one work pool, and under that pool I can add work queues by creating new servers to support more load.
n
just to clarify
• work pools are a server-side concept that correspond to a type of infra your flow can run on
• workers are a client-side daemon that listens for flow runs scheduled to the work pool and submits them according to the base job template in the work pool
workers are the thing you start (`prefect worker start --pool some-pool`) somewhere in your environment
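as a rough sketch of that pairing (pool name and infrastructure type are illustrative):

```bash
# server-side: create the work pool the deployments will target
prefect work-pool create "some-pool" --type process

# client-side: start a worker on your server that polls that pool for flow runs
prefect worker start --pool "some-pool"
```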
> I tried deploying a flow without a default work pool but it didn't work.
deployments need a work pool, that is, deployments need a description of how the flow run should execute (on kubernetes, as a subprocess, etc). you can set concurrency in one or many places:
• on the deployment itself: only N flow runs of this deployment can run at once
• on the work pool: only N flow runs scheduled to this work pool can run at once
• on a work queue within a work pool: only N flow runs scheduled to that queue can run at once
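for example, the pool- and queue-level limits can be set from the CLI roughly like this (names and numbers are illustrative, and exact flags may vary slightly between Prefect versions):

```bash
# work pool level: at most 10 flow runs from this pool at once
prefect work-pool set-concurrency-limit "remote-pool" 10

# work queue level: a queue inside the pool with its own cap
prefect work-queue create "queue-1" --pool "remote-pool" --limit 4
```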
v
So say I have two deployments: `image-creator` and `video-creator`. Now I create a pool called `remote-pool` and both deployments point to the same default `remote-pool`. Since image creation is faster, I can set deployment concurrency to, say, 8 for `image-creator` and no concurrency limit on `video-creator`.
Now if I want to add more concurrency slots under it as more people make videos, can I just add more workers here? Each worker is a new server/client where the flow runs will execute. Based on that client's resources, how do I set the concurrency of a worker? Will this work?
prefect worker start --pool "remote-pool" --work-queue "queue-1" --name "worker-1" --limit 6
`remote-pool` already exists; I want to increase capacity, and the new server has 6 cores. In the UI, where can I see that this limit of 6 is set properly?
Or do you suggest I run two pools for the two deployments?
The final goal is to limit image-creation concurrency to 8 and leave the remainder for video creation, so that as we add more workers, all new capacity goes to video creation.
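For what it's worth, a rough sketch of that setup, assuming Prefect 3.x, that the flows are deployed from source, and that your version's `.deploy()` accepts a deployment-level `concurrency_limit` (the repo URL, entrypoints, and numbers are placeholders):

```python
from prefect import flow

# placeholder repo that both flows live in; workers pull code from here
source = "https://github.com/example/media-flows"

# image creation: capped at 8 concurrent runs of this deployment
flow.from_source(
    source=source,
    entrypoint="flows/image_creator.py:image_creator",
).deploy(
    name="image-creator",
    work_pool_name="remote-pool",
    concurrency_limit=8,
)

# video creation: no deployment-level limit, so it can use whatever
# capacity the pool's workers have left over
flow.from_source(
    source=source,
    entrypoint="flows/video_creator.py:video_creator",
).deploy(
    name="video-creator",
    work_pool_name="remote-pool",
)
```

Each new server then adds capacity by running its own worker against the same pool, e.g. `prefect worker start --pool "remote-pool" --name "worker-2" --limit 6`, where `--limit` caps how many flow runs that particular worker will execute at once.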