# ask-marvin
a
@Marvin when I call `serve` through Python... that creates a long-lasting process that seems to process my jobs. Does that mean I don't need a worker?
m
The `serve` command in Prefect is designed to create a deployment for regular execution of tasks, often used for scheduling shell commands or other workflows. It essentially sets up a long-lasting process that can handle scheduled tasks, but it doesn't replace the need for a worker if you are using Prefect's worker-based architecture. Workers in Prefect are lightweight polling services that retrieve and execute scheduled runs from a work pool; they are responsible for provisioning the infrastructure needed to execute flow runs. Different worker types exist, such as process, Kubernetes, Docker, and ECS workers, each suited for different execution environments. In summary, while `serve` can manage scheduled tasks and create long-running processes, if your setup involves worker-based deployments you would still need workers to handle the execution of those tasks. The `serve` function and workers serve different purposes within the Prefect ecosystem. For more information, you can refer to the following resources:
- Deploy with serve
- Daemonize worker processes
- Learn about workers
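To make the worker-based path concrete, here is a rough sketch, assuming a recent Prefect release with `flow.deploy()`; the pool name, image tag, and schedule are placeholders, not details from this thread:

```python
from prefect import flow

@flow
def nightly_dbt_run():
    ...  # placeholder: trigger your dbt-on-Snowflake job here

if __name__ == "__main__":
    # Worker-based path: register a deployment against a work pool.
    # Nothing executes here; a separate worker picks up scheduled runs, e.g.:
    #   prefect worker start --pool "my-ecs-pool"
    nightly_dbt_run.deploy(
        name="nightly-dbt",
        work_pool_name="my-ecs-pool",          # placeholder pool name
        image="my-registry/dbt-flows:latest",  # placeholder image (ECS/Docker pools run containers)
        cron="0 6 * * *",                      # example schedule
    )
```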
n
short answer: yes
`.serve` is like an easy-mode process worker: it listens for / executes scheduled runs of all the flows you're serving
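In code, the serve path is just a blocking call in a long-lived Python process; a minimal sketch (the flow name and schedule are illustrative):

```python
from prefect import flow

@flow
def nightly_dbt_run():
    ...  # placeholder: trigger your dbt-on-Snowflake job here

if __name__ == "__main__":
    # Blocks and polls the Prefect API for scheduled runs of this flow,
    # executing them from this long-lived process.
    nightly_dbt_run.serve(name="nightly-dbt", cron="0 6 * * *")
```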
a
@Nate thanks! I'm trying to get started with Prefect for running some dbt jobs on Snowflake. Would this setup make sense?
• One container with the Prefect server
• One container running `serve` with all my scheduled flows
I've also considered the following:
• One container with the Prefect server
• One container with an ECS worker pool
I'm struggling to get the second option to work, so I'm leaning towards the first option.
n
well here's the main question I'd ask: do you need each flow run to run in its own container?
• no: just follow this guide and make your serve process the entrypoint of a long-lived container
• yes: create an ECS work pool (in the UI), then run a worker that listens to this pool and submits flow runs as jobs to ECS
you can run the server (or use Cloud) however you want; you'll just have to set `PREFECT_API_URL` wherever you're doing your work (on the long-lived container, for example)
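For the first option, the entrypoint of the long-lived container could look roughly like the sketch below; the flow names, schedules, and the `prefect-server` hostname are placeholders that depend on how your containers are networked:

```python
# entrypoint.py (sketch): serve several flows from one long-lived container.
# The container's environment should point at your server, for example:
#   PREFECT_API_URL=http://prefect-server:4200/api   (hostname/port are placeholders)
from prefect import flow, serve

@flow
def dbt_staging():
    ...  # placeholder: run your dbt-on-Snowflake staging job

@flow
def dbt_marts():
    ...  # placeholder: run your dbt-on-Snowflake marts job

if __name__ == "__main__":
    # serve() blocks and executes scheduled runs of every deployment passed to it
    serve(
        dbt_staging.to_deployment(name="dbt-staging", cron="0 5 * * *"),
        dbt_marts.to_deployment(name="dbt-marts", cron="0 6 * * *"),
    )
```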
a
that guide looks perfect! this should help get me up and running, appreciate you taking the time!