Marvin
08/28/2025, 8:06 AM
Workers connect to the server by setting PREFECT_API_URL (which must include /api).
Key details:
- Default server port: 4200
- PREFECT_API_URL must include /api (e.g., `http://your-server-host:4200/api`)
- Create a work pool (e.g., a “process” pool) and have workers attach to it
Below are two compose files you can use.
1) On the server machine (exposes UI and API)
Create docker-compose.server.yml:
```
services:
  prefect-server:
    image: prefecthq/prefect:3-latest
    container_name: prefect-server
    command: >
      bash -lc "prefect server start --host 0.0.0.0 --port 4200 --ui"
    ports:
      - "4200:4200"
    environment:
      PREFECT_LOGGING_LEVEL: INFO
      PREFECT_UI_ENABLED: "true"
    volumes:
      # Persist Prefect data in the mounted volume (includes the SQLite DB by default)
      - prefect_data:/root/.prefect
    restart: unless-stopped

volumes:
  prefect_data:
```
Start it:
docker compose -f docker-compose.server.yml up -d
Then create a “process” work pool (one-time). You can do this via the UI or CLI. Example via CLI inside the server container:
docker exec -it prefect-server bash -lc 'prefect work-pool create "process-pool" --type process || true'
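To confirm the pool exists, you can list pools the same way (prefect work-pool ls is a standard Prefect CLI command):
```
docker exec -it prefect-server bash -lc 'prefect work-pool ls'
```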
2) On each worker machine (one compose per machine)
Create docker-compose.worker.yml (re-use on each worker host by just changing the name and server host):
```
services:
  prefect-worker:
    image: prefecthq/prefect:3-latest
    container_name: prefect-worker
    environment:
      # Point to your server's API; note the required /api suffix
      PREFECT_API_URL: "http://YOUR_SERVER_HOST_OR_IP:4200/api"
      PREFECT_LOGGING_LEVEL: INFO
    # Double $$ escapes the $ so Compose passes $(hostname) through to the shell
    command: >
      bash -lc "prefect worker start
      --pool process-pool
      --name $$(hostname)-worker"
    restart: unless-stopped
```
Start it on each worker host:
docker compose -f docker-compose.worker.yml up -d
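To confirm a worker has connected, tail its logs; you should see it register and start polling the pool (exact log wording varies by Prefect version):
```
docker compose -f docker-compose.worker.yml logs -f prefect-worker
```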
Notes and tips:
- Ensure port 4200/tcp is reachable from the worker machines to the server host. If you cannot open the port, use an SSH local tunnel from each worker machine (see the connectivity check after these notes):
  - SSH tunnel: ssh -N -L 14200:localhost:4200 user@YOUR_SERVER_HOST
  - Then set PREFECT_API_URL=http://localhost:14200/api in the worker's environment.
- The work pool in the example is of type “process” so no extra dependencies are required. If you later choose a Docker/Kubernetes/ECS pool, workers will need the corresponding runtime and permissions, and you’ll point workers at that pool instead.
- You can verify options with:
- prefect server start --help
- prefect work-pool create --help
- prefect worker start --help
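If a worker cannot connect, a quick connectivity check is to hit the server's health endpoint (directly or through the tunnel); this assumes curl is available on the worker host, and a healthy server should return true:
```
# Direct connection
curl http://YOUR_SERVER_HOST_OR_IP:4200/api/health

# Through the SSH tunnel
curl http://localhost:14200/api/health
```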
Docs:
- Server CLI (self-hosted)
- Settings reference (see PREFECT_API_URL)
Marvin
08/28/2025, 8:07 AM
1) On the server machine, start Prefect Server:
prefect server start --host 0.0.0.0 --port 4200
- UI: http://server-host:4200
- API: http://server-host:4200/api
You can restrict to localhost if you’ll use SSH tunnels only:
prefect server start --host 127.0.0.1 --port 4200
2) Create a work pool (on the server or anywhere with access to the API)
- “process” is a good default for running flows directly on the worker machine:
prefect work-pool create my-pool --type process
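To confirm the pool was created (both are standard Prefect CLI commands):
```
prefect work-pool ls
prefect work-pool inspect my-pool
```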
3) On each worker machine: connect to the Server API
Option A — direct network access:
- If the worker can reach the server directly, set:
export PREFECT_API_URL=http://server-host:4200/api
Option B — SSH tunnel (recommended if the API is not exposed)
- Create a local SSH forward from the worker to the server:
ssh -N -L 14200:127.0.0.1:4200 user@server-host
- Then point Prefect on the worker to the forwarded port:
export PREFECT_API_URL=http://127.0.0.1:14200/api
Tips for reliability:
- Keep tunnels alive:
ssh -N -L 14200:127.0.0.1:4200 user@server-host -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 -o ServerAliveCountMax=3
- Consider autossh for auto-reconnect:
autossh -M 0 -N -L 14200:127.0.0.1:4200 user@server-host
4) Start a worker on each machine
- Ensure Prefect is installed and your flow’s dependencies are available on each worker machine.
- Start the worker and point it at your work pool:
prefect worker start -p my-pool -n worker-1
Repeat on other machines with different names (e.g., worker-2, worker-3, …).
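Putting steps 3 and 4 together on a tunneled worker host, for example (the pool and worker names are just the placeholders used above):
```
# Install Prefect (plus your flow's dependencies) on the worker machine
pip install -U prefect

# Point the CLI at the tunneled API and start the worker
export PREFECT_API_URL=http://127.0.0.1:14200/api
prefect worker start -p my-pool -n "$(hostname)-worker"
```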
5) Deploy your flows to the pool
Use either the CLI from your project:
prefect deploy ./flows/etl.py:my_flow --name etl --pool my-pool
Or the Python API (no deprecated 2.x calls):
```
from prefect import flow

@flow
def my_flow():
    ...

# Deploy from a Git source so workers can pull code
my_flow.from_source(
    "https://github.com/your-org/your-repo",
    entrypoint="flows/etl.py:my_flow",
).deploy(
    name="etl",
    work_pool_name="my-pool",
)
```
Notes:
- Make sure workers can access the code at runtime (e.g., via Git source, a remote filesystem, or a Docker/Kubernetes work pool if you prefer containerized execution).
- For “process” pools, dependencies must be installed on the worker machine.
6) Verify everything
- In the UI (http://server-host:4200), you should see your work pool and live workers.
- Trigger a flow run from the UI or via deployment schedules to confirm workers pick up runs.
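You can also trigger an ad-hoc run from the CLI with prefect deployment run; the slug below assumes the example deployment above (flow names default to a dashed form, so adjust to your actual flow and deployment names):
```
prefect deployment run "my-flow/etl"
```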
Security and networking tips
- Prefect Server has no built-in auth; avoid exposing it publicly. Prefer SSH tunneling or a VPN.
- If you must expose the API, use a reverse proxy with TLS and network access controls.
- If you bind the server only to localhost (127.0.0.1), workers must use SSH tunneling.
Would you like a systemd unit template for the worker and/or the SSH tunnel to keep them running on boot?
Useful docs:
- Prefect docs: https://docs.prefect.io
- Settings (including PREFECT_API_URL): Prefect Settings
- Deployments overview: Deploying flows
- Work pools and workers: Work pools and workers
Marvin
08/28/2025, 8:11 AM
1) Server machine: save this as docker-compose.server.yml and start it with docker compose up -d.
```
version: "3.9"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: prefect
      POSTGRES_PASSWORD: prefect
      POSTGRES_DB: prefect
    volumes:
      - pg_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U prefect -d prefect"]
      interval: 5s
      timeout: 5s
      retries: 10
    restart: unless-stopped

  prefect-server:
    # Pick a specific Prefect 3.x tag (example shown)
    # See Prefect Docker tags; prefer pinning exact versions, e.g. 3.1.0-python3.11
    image: prefecthq/prefect:3.1.0-python3.11
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      # Point Prefect Server at Postgres
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
      # Listen on all interfaces; expose port 4200
      PREFECT_SERVER_API_HOST: 0.0.0.0
      PREFECT_SERVER_API_PORT: 4200
      # Enable built-in scheduler/services
      PREFECT_API_SERVICES_SCHEDULER_ENABLED: "true"
      # Optional: enable the UI (default is true)
      PREFECT_UI_ENABLED: "true"
      # Optional: if you access the UI via a hostname, set these to match your external URL
      # PREFECT_UI_URL: "http://your-hostname:4200"
      # PREFECT_UI_API_URL: "http://your-hostname:4200/api"
    ports:
      - "4200:4200"
    command: >
      bash -lc "prefect server start --host 0.0.0.0 --port 4200"
    restart: unless-stopped

volumes:
  pg_data:
```
After it’s up, the UI should be at http://SERVER_HOST_OR_IP:4200 and the API at http://SERVER_HOST_OR_IP:4200/api.
2) Create a work pool (run once, anywhere that can reach the API)
If you run this from your laptop, set PREFECT_API_URL so the CLI talks to your server.
# If running locally and your server is reachable
export PREFECT_API_URL=http://SERVER_HOST_OR_IP:4200/api
# Create a process-type work pool
prefect work-pool create "my-process-pool" --type process
3) Worker machines: docker-compose.worker.yml
Copy this file to each worker machine. Each will run a worker that polls the pool above. Replace SERVER_HOST_OR_IP and optionally give each worker a unique name.
Direct HTTP to the server (recommended if you can open port 4200 on the server):
```
version: "3.9"
services:
  worker:
    image: prefecthq/prefect:3.1.0-python3.11
    environment:
      # Point this worker at your Prefect Server API
      PREFECT_API_URL: http://SERVER_HOST_OR_IP:4200/api
    # Double $$ escapes the $ so Compose passes $(hostname) through to the shell
    command: >
      bash -lc "prefect worker start
      --pool my-process-pool
      --name worker-$$(hostname)
      --work-queue default"
    restart: unless-stopped
```
SSH-tunneled option (if you can’t expose 4200):
- On each worker machine, open an SSH tunnel to the server:
ssh -N -L 4200:localhost:4200 user@SERVER_HOST_OR_IP
- Then use this compose file so the container can hit the host’s forwarded port via host.docker.internal:
```
version: "3.9"
services:
  worker:
    image: prefecthq/prefect:3.1.0-python3.11
    # Allow container to reach the host loopback via host.docker.internal
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      PREFECT_API_URL: http://host.docker.internal:4200/api
    command: >
      bash -lc "prefect worker start --pool my-process-pool --name worker-$$(hostname)"
    restart: unless-stopped
```
Marvin
08/28/2025, 8:11 AM
- Start each worker the same way (docker compose up -d). Each worker will register itself and pull work from the same pool.
- Pick a specific Prefect image tag to avoid surprises, e.g., prefecthq/prefect:3.x.y-python3.11.
- For docker-type or kubernetes-type work pools, create the appropriate pool (--type docker or --type kubernetes) and ensure the worker has the necessary permissions and optional dependencies installed.
- Self-hosted Prefect Server (OSS) does not require API keys by default. If you add a reverse proxy/TLS/auth, update PREFECT_API_URL accordingly (see the example below).
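For example, if the API ends up behind a TLS-terminating reverse proxy, the worker setting would look like this (prefect.example.com is just a placeholder hostname):
```
export PREFECT_API_URL=https://prefect.example.com/api
```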
If you share your exact environment (IPs/hostnames, whether you’ll expose 4200 or require tunneling, and which pool type you want), I can tailor these files precisely for you.