oleander-whirlwind
10/06/2025, 9:13 AMMarvin
10/06/2025, 9:18 AM
- Flow.deploy(...) or flow.from_source(...).deploy(...):
- Production-style deployments meant to run on workers attached to work pools (process, Docker, Kubernetes, etc.).
- Code executes wherever your workers launch jobs — not in your local process.
- Lets you define how your code is retrieved (e.g., Git) and how the environment is provisioned (e.g., Docker image with dependencies).
- Schedules continue regardless of your local machine; runs get picked up by workers.
Where does the code run?
- Prefect’s server/Cloud never executes your code. It only orchestrates.
- With serve: code runs on the machine where you called .serve() (the “client” machine).
- With deploy: code runs on your infrastructure where your workers execute (VMs, containers, k8s pods, etc.).
What happens under the hood with .serve?
- Registers a “runner” style deployment for each served flow (you’ll see them in the UI if connected to an API).
- Starts a local runner loop that:
- Polls the Prefect API for new/scheduled runs for those deployments
- Executes them in-process on your machine using your current Python environment
- Reports states, logs, and artifacts back to the API
- On shutdown, by default pauses schedules so Prefect won’t enqueue runs that no one will pick up (configurable with pause_on_shutdown=False).
- There’s also a module-level prefect.serve(flow1, flow2, ...) helper to serve multiple flows in one process.
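The polling loop described above can be sketched in plain Python. This is a toy illustration only — FakeAPI, runner_loop, and the state strings are invented names for this sketch, not Prefect's actual runner implementation:

```python
import time
from collections import deque

class FakeAPI:
    """Stands in for the Prefect API: holds scheduled runs and final states."""
    def __init__(self):
        self.scheduled = deque()
        self.states = {}

    def schedule(self, run_id, fn):
        self.scheduled.append((run_id, fn))

    def poll(self):
        return self.scheduled.popleft() if self.scheduled else None

    def report(self, run_id, state):
        self.states[run_id] = state

def runner_loop(api, max_polls=5, interval=0.0):
    """Poll for runs and execute them in this process, like .serve() does."""
    for _ in range(max_polls):
        run = api.poll()
        if run is None:
            time.sleep(interval)  # nothing to do; wait before the next poll
            continue
        run_id, fn = run
        try:
            fn()  # executed in the current Python environment, in-process
            api.report(run_id, "COMPLETED")
        except Exception:
            api.report(run_id, "FAILED")

api = FakeAPI()
api.schedule("run-1", lambda: print("Hello from serve"))
api.schedule("run-2", lambda: 1 / 0)  # raises; reported as FAILED
runner_loop(api)
print(api.states)  # {'run-1': 'COMPLETED', 'run-2': 'FAILED'}
```

The real runner also handles heartbeats, log shipping, and schedule pausing, but the poll-execute-report shape is the core idea.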
How are code and Python packages “deployed”?
- They are not deployed to the Prefect server/Cloud. You provide an execution environment that has your code and dependencies:
- With serve: your current environment is used as-is. Your code and packages must already be installed on the machine running .serve().
- With deploy: you define how workers will get your code and dependencies:
- From source: point to a repo and entrypoint; workers clone/pull the code at runtime (e.g., git clone). You can optionally run a pip install step.
- Container image: build a Docker image with your code and requirements; workers run that image. deploy(..., image=..., build=True, push=True) or use prefect deploy with a prefect.yaml.
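In both retrieval styles the entrypoint string has the shape "path/to/file.py:flow_name" — a path relative to the repo or image root, a colon, then the flow's function name. A toy parser (parse_entrypoint is an invented helper for this sketch, not a Prefect API) shows the split:

```python
# Split an entrypoint string like "flows/etl.py:my_flow" into its two parts.
# Illustrative only; Prefect resolves entrypoints internally.
def parse_entrypoint(entrypoint: str) -> tuple[str, str]:
    path, _, flow_name = entrypoint.rpartition(":")
    return path, flow_name

print(parse_entrypoint("flows/etl.py:my_flow"))  # ('flows/etl.py', 'my_flow')
```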
Examples
Serve locally (runs on this machine)
```python
from prefect import flow

@flow
def my_flow():
    print("Hello from serve")

if __name__ == "__main__":
    # Run on a schedule, executed locally on this machine
    my_flow.serve(
        name="local-dev",
        cron="0 * * * *",        # run hourly
        pause_on_shutdown=True,  # default; pause schedules on exit
    )
```
Deploy to a work pool (runs on workers)
```python
from prefect import Flow

# Point to your repo (workers will pull this code) and deploy to a pool
Flow.from_source(
    "https://github.com/org/repo",
    entrypoint="flows/etl.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="k8s-pool",
    image="ghcr.io/org/etl:{{ git_sha }}",  # build/push if desired
    cron="0 * * * *",
    build=True,
    push=True,
)
```
CLI alternative (validated)
```
prefect deploy ./flows/etl.py:my_flow --name prod --pool k8s-pool --cron "0 * * * *"
```
Notes and gotchas
- There is no prefect serve CLI command; serving is a Python API (Flow.serve(...) or prefect.serve(...)).
- Deployment.build_from_flow() and prefect deployment build no longer exist in 3.x; use flow.from_source(...).deploy(...) or prefect deploy instead.
- Serving or deploying requires a Prefect API (Cloud or local Server) to register deployments and orchestrate runs. If you want to run truly offline, call your flows directly (e.g., my_flow()) instead of using serve/deploy.
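The "call directly" option is just ordinary Python: a flow-decorated function remains callable. A dependency-free sketch of the idea — the no-op flow decorator below is a stand-in so this snippet runs without Prefect installed; with Prefect, you'd use from prefect import flow and the call works the same way:

```python
# Stand-in for Prefect's @flow decorator so this sketch has no dependencies.
def flow(fn):
    return fn

@flow
def my_flow(name: str) -> str:
    return f"Hello, {name}!"

# No API, no serve, no deploy: just call it like any function.
print(my_flow("offline"))  # Hello, offline!
```

With the real decorator, the direct call additionally records a flow run (against Cloud, a local server, or an ephemeral API), but the calling convention is identical.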
Docs and references
- Deployments and prefect.yaml: prefect.yaml guide
- Running in local processes: Run flows in local processes
- Serving flows (container example): Serve flows with Docker
- Blog intro to serve: Schedule your code quickly with flow.serve
If you share where you want your flows to run (local host, Docker, Kubernetes, etc.) I can suggest the cleanest pattern and a minimal working setup for your environment.