# ask-marvin
a
@Marvin is it possible to deploy a flow to a `process` work-pool using the `flow.deploy()` method? If not, what is the recommended approach?
m
The `flow.deploy()` method is not specifically mentioned in the context of deploying a flow to a `process` work-pool. However, the documentation suggests that you can configure your deployment to send flow runs to your work pool, either through an interactive command line experience or by deploying an existing `prefect.yaml` file. In the `prefect.yaml` file, you would specify the work pool like so:

```yaml
work_pool:
  name: my-push-pool
```

Deploying your flow to the `my-push-pool` work pool will ensure that runs that are ready for execution are submitted immediately, without the need for a worker to poll for them. For more details, you can refer to the Prefect documentation.
n
`flow.serve()` is the current recommended approach for running declarative Python deployments as a process, whereas `flow.deploy()` is for deployments where you need dynamically allocated infra.

Otherwise yeah, like Marvin said, you can totally create a deployment for a process worker to run with the `prefect.yaml` UX. Although actually, you wouldn't use a push pool for a process worker, so Marvin's a little off base.
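As a concrete sketch of the `prefect.yaml` route with a process work pool rather than a push pool (the pool name, deployment name, and entrypoint here are made up for illustration, not taken from this thread):

```yaml
# Hypothetical prefect.yaml fragment: a deployment targeting a process work pool.
deployments:
  - name: my-process-deployment
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-process-pool
```

With this in place, a process worker polling `my-process-pool` would pick up and run the flow as a local process.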
a
Thank you for clarifying! I have seen `.serve()`; however, from what I understand, it doesn't allow for queue or concurrency configurations the way `.deploy()` does. Additionally, the way I understand it, `.serve()` has to be run wherever the process is meant to live (i.e. I can't run it locally and have the process start on one of my VMs).
n
> `.serve()` has to be run wherever the process is meant to live (i.e. I can't run it locally and have the process start on one of my VMs).

Correct, but unless you're doing something atypical, the same would be true for a process worker, i.e. the flow will run as a process where the process worker lives, so the `serve` process sort of takes over the role of the process worker.

And this is partly true:

> it doesn't allow for queue or concurrency configurations

You can specify a `limit` for concurrency with serve, but you're correct that you don't have the queue concurrency there, because there is no inherent work pool / queue with serve.
a
Gotcha - I hadn't seen anything on serve accepting a concurrency `limit`, so I will take a second look. I tried `limit`, but that threw an unexpected keyword error. Going the `process` work-pool route and using `prefect.yaml` along with the CLI makes sense, and I was able to get that to work. I guess I was hoping there might have been a way to explicitly define the deployment alongside the flow definition in this scenario, the way you can with the `.deploy()` method.
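For reference, the `prefect.yaml` + CLI route described above typically looks something like this (the pool name is illustrative, and these commands assume a Prefect installation pointed at a running server or Prefect Cloud):

```shell
# Create a process-type work pool (the name "my-process-pool" is made up)
prefect work-pool create my-process-pool --type process

# Register the deployment(s) defined in prefect.yaml
prefect deploy

# Start a worker that polls the pool and runs flows as local processes
prefect worker start --pool my-process-pool
```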
n
Ah, so `limit` is a field on the abstraction that underlies `.serve`, so you can pass it to the `serve` function, but not the `Flow.serve` method (to me, it seems like it should be accepted in both places; I can raise that internally). So that would look like:

```python
from prefect import flow, serve

@flow
def foo():
    pass

serve(foo.to_deployment(name="foo deployment"), limit=3)
```

But anyways, yeah, it sounds like for the time being, `prefect.yaml` to define a deployment with a process worker might be more what you're looking for. But this:

> I was hoping there might have been a way to explicitly define alongside the flow definition in this scenario the way you can with the `.deploy()` method

seems like a totally reasonable expectation (and is generally true except for this process worker case), and I anticipate we will make this possible in the future for the process worker as well (though don't quote me on that 🙂).