# prefect-getting-started
Henrik:
I've read the documentation multiple times now and I'm still stuck on one question. I understand the concept of workers and queues now, but it's not clear to me whether I even need a worker when I serve a flow in a Dockerfile. As far as I understood the documentation, `serve` creates a long-running process that waits for the flow to be triggered, while only `deploy` will provision its own infrastructure, but that seems more complex to set up. Is this correct? Furthermore, would `serve` still allow the flow to run multiple times at the same time? Thanks for any help.
@Daryl I didn't read the documents properly. Tested it with a Dockerfile and `serve`; it works fine with parallel runs. The documentation says: "A deployment created with the Python `flow.serve` method or the `serve` function runs flows in a subprocess on the same machine where the deployment is created. It does not use a work pool or worker." Seems like I can turn off my agent again :) If working with work pools, Docker or K8s seems to be required. I didn't test that, since I'm not sure how to add a Docker work pool on a self-hosted system, and I don't think I really need it for now.
Daryl:
@Henrik I am using Docker Compose with a server and a CLI instance, submitting flows from the CLI. That seems to work well for testing, but I haven't gotten as far as deploying unattended into production yet. Trying to figure out why I can't parallelize synchronous subflows to ramp up ingestion. 8-/
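On the subflow question: synchronous subflows called from a parent flow run one after another; the usual way to get concurrent subflows is to declare them `async` and await them together. The underlying pattern is plain asyncio fan-out, sketched here without Prefect so it stands alone (in a real flow you would decorate both functions with `@flow`; `ingest_partition` is a hypothetical name):

```python
import asyncio
import time


async def ingest_partition(p: int) -> int:
    # Stand-in for a subflow ingesting one partition of data.
    await asyncio.sleep(0.1)
    return p


async def parent() -> list:
    # gather() schedules all subflow coroutines at once, so their waits
    # overlap instead of running back to back.
    return await asyncio.gather(*(ingest_partition(i) for i in range(4)))


start = time.perf_counter()
results = asyncio.run(parent())
elapsed = time.perf_counter() - start
print(results)  # four partitions finish in roughly one 0.1s wait, not 0.4s
```

The same fan-out works inside a Prefect flow because async subflows awaited together run concurrently, whereas calling sync subflows in a loop serializes them.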