Aj Floersch
11/01/2023, 7:38 PM
Is it possible to deploy a flow to a `process` work-pool using the `flow.deploy()` method? If not, what is the recommended approach?

Marvin
11/01/2023, 7:39 PM
The `flow.deploy()` method is not specifically mentioned in the context of deploying a flow to a `process` work-pool. However, the documentation suggests that you can configure your deployment to send flow runs to your work pool, either through an interactive command-line experience or by deploying an existing `prefect.yaml` file. In the `prefect.yaml` file, you would specify the work pool like so:
```yaml
work_pool:
  name: my-push-pool
```
Deploying your flow to the `my-push-pool` work pool ensures that runs ready for execution are submitted immediately, without a worker needing to poll for them.

For more details, you can refer to the Prefect documentation.

Nate
11/01/2023, 7:50 PM
`flow.serve()` is the current recommended approach for running declarative Python deployments as a process, whereas `flow.deploy()` is for deployments where you need dynamically allocated infra.
Nate
11/01/2023, 7:50 PM
`prefect.yaml` UX

Nate
11/01/2023, 7:51 PM

Aj Floersch
11/01/2023, 8:04 PM
From what I understand, `.serve()` doesn't allow for queue or concurrency configurations the way `.deploy()` does. Additionally, the way I understand it, `.serve()` has to be run wherever the process is meant to live (i.e. I can't run it locally and have the process start on one of my VMs).

Nate
11/01/2023, 8:06 PM
> `.serve()` has to be run wherever the process is meant to live (i.e. I can't run it locally and have the process start on one of my VMs).
Correct, but unless you're doing something atypical, the same would be true for a process worker, i.e. the flow will run as a process where the process worker lives, so the `serve` process sort of takes over the role of the process worker.

And this is partly true:
> it doesn't allow for queue or concurrency configurations
You can specify a `limit` for concurrency with serve, but you're correct that you don't have the queue concurrency there, because there is no inherent work pool / queue with serve.

Aj Floersch
11/01/2023, 8:16 PM
I'll take a second look at `limit`. I tried `limit`, but that threw an unexpected-keyword error. Going the `process` work-pool route and using `prefect.yaml` along with the CLI makes sense, and I was able to get that to work. I guess I was hoping there might be a way to explicitly define it alongside the flow definition in this scenario, the way you can with the `.deploy()` method.

Nate
11/01/2023, 8:43 PM
`limit` is a field on the abstraction that underlies `.serve`, so you can pass it to the `serve` function, but not the `Flow.serve` method (to me, it seems like it should be accepted in both places; I can raise that internally). So that would look like:
```python
from prefect import flow, serve


@flow
def foo():
    pass


serve(foo.to_deployment(name="foo deployment"), limit=3)
```
But anyway, yeah, it sounds like for the time being, using `prefect.yaml` to define a deployment with a process worker might be more what you're looking for. But this:
> I was hoping there might have been a way to explicitly define alongside the flow definition in this scenario the way you can with the `.deploy()` method
seems like a totally reasonable expectation (and is generally true except for this process-worker case), and I anticipate we will make this possible for the process worker as well in the future (though don't quote me on that 🙂).
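For reference, a minimal `prefect.yaml` sketch of the process-worker route discussed above (the deployment name, entrypoint path, and pool name are placeholders, not from the thread):

```yaml
# Minimal prefect.yaml sketch; all names are illustrative.
deployments:
  - name: my-process-deployment
    entrypoint: flows/my_flow.py:my_flow   # path/to/file.py:flow_function
    work_pool:
      name: my-process-pool
```

You would then run `prefect deploy` to register the deployment, and start a worker with `prefect worker start --pool my-process-pool` on the machine where the runs should execute.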