Tom Han
10/09/2025, 9:12 PM
+ Exception Group Traceback (most recent call last):
...
| raise RemoteProtocolError(msg)
| httpcore.RemoteProtocolError: Server disconnected without sending a response.
|
| The above exception was the direct cause of the following exception:
|
| Traceback (most recent call last):
| File "/opt/venv/lib/python3.12/site-packages/prefect/runner/runner.py", line 982, in _cancel_run
| flow_run = await self._client.read_flow_run(flow_run)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/orchestration/_flow_runs/client.py", line 716, in read_flow_run
| response = await self.request(
| ^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/orchestration/base.py", line 53, in request
| return await self._client.send(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/base.py", line 348, in send
| response = await self._send_with_retry(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/base.py", line 268, in _send_with_retry
| response = await send(request, *send_args, **send_kwargs)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1629, in send
| response = await self._send_handling_auth(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1657, in _send_handling_auth
| response = await self._send_handling_redirects(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1694, in _send_handling_redirects
| response = await self._send_single_request(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1730, in _send_single_request
| response = await transport.handle_async_request(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 393, in handle_async_request
| with map_httpcore_exceptions():
| File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
| self.gen.throw(value)
| File "/opt/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
| raise mapped_exc(message) from exc
| httpx.RemoteProtocolError: Server disconnected without sending a response.
+------------------------------------
An exception occurred.
When I spawn a lot of workers at the same time, I think it clogs up my self-hosted Prefect server, and that leads to my pods failing. How should I alter my self-hosted server to prevent this? I am using a Mac mini.
Marvin
10/09/2025, 9:12 PM
Marvin
10/09/2025, 9:17 PM
# Database connectivity
PREFECT_SERVER_DATABASE_CONNECTION_URL=postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
PREFECT_SERVER_DATABASE_TIMEOUT=30
PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
# SQLAlchemy pool (increase for bursts)
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
- Make sure your Postgres has sufficient max_connections for this.
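As a rough back-of-envelope check (a sketch, using the example pool values above; the commented `psql` line assumes the credentials from the connection URL):

```shell
# Hypothetical sizing rule: each Prefect server instance may open up to
# POOL_SIZE + MAX_OVERFLOW Postgres connections; add headroom for psql
# sessions, other apps, and superuser-reserved slots.
POOL_SIZE=20
MAX_OVERFLOW=40
SERVER_INSTANCES=1
HEADROOM=20
NEEDED=$(( SERVER_INSTANCES * (POOL_SIZE + MAX_OVERFLOW) + HEADROOM ))
echo "max_connections should be at least $NEEDED"
# Compare against the live setting:
# psql -U prefect -h localhost -c "SHOW max_connections;"
```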
- Increase API keep-alive timeout
- Prevents premature connection drops during bursts:
PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
- Or via CLI: prefect server start --keep-alive-timeout 75
- OS limits: raise file descriptors on the Mac mini
- A burst of connections can hit ulimit defaults. Raise to something like 4096–65536:
sudo launchctl limit maxfiles 65536 65536
ulimit -n 65536
- If you run a reverse proxy (nginx/traefik)
- Increase keepalive/read timeouts, and set a small keepalive connection pool to the upstream API. For nginx:
keepalive_timeout 75s;
proxy_read_timeout 75s;
proxy_connect_timeout 10s;
proxy_send_timeout 75s;
keepalive 64;
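For context, those directives live in different nginx blocks: `keepalive` belongs in an upstream block, and pooled upstream connections also require HTTP/1.1 with the Connection header cleared. A sketch, assuming the API listens on 127.0.0.1:4200; the upstream name prefect_api is illustrative:

```nginx
upstream prefect_api {
    server 127.0.0.1:4200;
    keepalive 64;               # pooled keepalive connections to the API
}
server {
    listen 80;
    keepalive_timeout 75s;      # client-side keepalive
    location / {
        proxy_pass http://prefect_api;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # don't forward "Connection: close"
        proxy_read_timeout 75s;
        proxy_connect_timeout 10s;
        proxy_send_timeout 75s;
    }
}
```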
Worker/client-side smoothing
- Stagger worker starts to avoid a thundering herd
- If you’re launching many pods at once, add a small random sleep before `prefect worker start`:
bash -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
- Reduce poll frequency and spread it out
- Workers poll the API every query_seconds and send heartbeats every heartbeat_seconds. Bump these slightly (and/or vary them per worker) to reduce synchronized spikes:
PREFECT_WORKER_QUERY_SECONDS=15
PREFECT_WORKER_HEARTBEAT_SECONDS=30
PREFECT_WORKER_PREFETCH_SECONDS=10
- Increase client retry tolerance on transient failures
- The Prefect client already retries, but you can make it more forgiving during bursts:
PREFECT_CLIENT_MAX_RETRIES=10
PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
- Optional: reduce noisy API logging during scale-ups
- If you’re sending a lot of logs to the API, consider turning it off temporarily for load tests to see if it’s contributing to pressure.
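One way to de-synchronize the poll settings above is to add a small random offset per worker at startup (a sketch; the 15s base and 0-4s jitter are illustrative values):

```shell
# Hypothetical per-worker jitter: start from a base interval and add a random
# 0-4 second offset so many workers don't poll the API in lockstep.
BASE=15
JITTER=$(( RANDOM % 5 ))
export PREFECT_WORKER_QUERY_SECONDS=$(( BASE + JITTER ))
echo "PREFECT_WORKER_QUERY_SECONDS=$PREFECT_WORKER_QUERY_SECONDS"
```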
Quick “known-good” Compose-style sketch
- If you’re using Docker/Compose, here’s an example of the knobs in one place:
```
services:
  prefect-server:
    image: prefecthq/prefect:3-latest
    environment:
      PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT: "75"
      PREFECT_SERVER_DATABASE_CONNECTION_URL: "postgresql+asyncpg://prefect:prefect@postgres:5432/prefect"
      PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE: "20"
      PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW: "40"
      PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT: "60"
    command: >
```
Marvin
10/09/2025, 9:17 PM
Tom Han
10/09/2025, 9:19 PM
Marvin
10/09/2025, 9:19 PM
Marvin
10/09/2025, 9:20 PM
docker run --name prefect-postgres \
-e POSTGRES_PASSWORD=prefect \
-e POSTGRES_USER=prefect \
-e POSTGRES_DB=prefect \
-p 5432:5432 -d postgres:15
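Since the server will fail if it starts before Postgres accepts connections, a small retry gate can help. This `wait_for` helper is illustrative, not part of Prefect; `pg_isready` ships inside the official postgres image:

```shell
# Hypothetical helper: retry a readiness probe until it succeeds or we give up.
wait_for() {  # usage: wait_for <max-tries> <command...>
  tries=$1; shift
  i=0
  until "$@"; do
    i=$(( i + 1 ))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}
# Example: block until the container above accepts connections.
# wait_for 30 docker exec prefect-postgres pg_isready -U prefect -d prefect
```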
- Point Prefect Server at Postgres and increase pool sizes/timeouts:
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
prefect server start --ui
2) Smooth out worker load on the API
- Stagger worker startup so they don’t all connect/poll at once:
bash -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
- Reduce poll frequency and spread heartbeats a bit:
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
- Make the client a bit more tolerant during bursts:
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
3) System limits on the Mac
- Raise file descriptor limits to avoid “too many open files” under burst load:
ulimit -n 65536
If you run the server via a LaunchAgent or another supervisor, set the limits there as well.
Useful stopgaps if you must stay on SQLite
- Reduce the number of workers (e.g., 3–5) and/or stagger their starts aggressively. A small number of workers can still launch many flow runs; 20 workers is usually unnecessary and creates avoidable API/DB pressure.
- Keep the worker/client tuning above (poll/heartbeat/retries).
- Increase DB timeouts a bit (applies to SQLite too):
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Note: SQLite will still be more fragile under concurrency than Postgres; Postgres is the real fix.
If you’d like, I can tailor the exact env vars/commands to your launch method. Do you want a quick “SQLite patch” for now, or are you open to switching to Postgres on the Mac mini?
Tom Han
10/09/2025, 9:21 PM
Marvin
10/09/2025, 9:21 PM
Marvin
10/09/2025, 9:22 PM
cp /path/to/your.sqlite /path/to/your.sqlite.bak
2) Start Postgres locally
- Quick local container:
docker run --name prefect-postgres \
-e POSTGRES_PASSWORD=prefect \
-e POSTGRES_USER=prefect \
-e POSTGRES_DB=prefect \
-p 5432:5432 -d postgres:15
3) Install pgloader (on macOS)
brew install pgloader
4) Stop Prefect server (make sure it’s not writing to the SQLite DB)
- Stop the process you used to start it.
5) Run the migration (SQLite -> Postgres)
- Replace the SQLite path with your actual db file (absolute path recommended).
pgloader sqlite:///absolute/path/to/your.sqlite postgresql://prefect:prefect@localhost:5432/prefect
- This copies tables and data, including the alembic_version table Prefect uses for migrations.
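To spot-check the copy afterwards, you can compare row counts table by table. This helper is illustrative; `flow_run` is one of Prefect's tables, and the commented commands assume the example credentials above:

```shell
# Hypothetical spot check: count rows of one table in the SQLite source, then
# run the same COUNT(*) in Postgres after pgloader finishes and compare.
count_sqlite() {  # usage: count_sqlite <db-file> <table>
  sqlite3 "$1" "SELECT COUNT(*) FROM $2;"
}
# count_sqlite /absolute/path/to/your.sqlite flow_run
# psql "postgresql://prefect:prefect@localhost:5432/prefect" -tAc "SELECT COUNT(*) FROM flow_run;"
```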
6) Point Prefect Server to Postgres and start
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
prefect server start --ui
- Verify your historical runs/deployments show up in the UI.
B) Fresh start on Postgres (no migration)
1) Start Postgres (same docker command as above).
2) Point Prefect Server at Postgres and start (same env/command as above).
3) Recreate work pools/queues/deployments as needed.
Reference: community thread “Sqlite to Postgres database migration”.
Part 2 — Split of settings: Server vs Worker
Server environment (on your Mac mini where you run prefect server)
- Database backend and pooling
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"
# SQLAlchemy pool tuning for bursts
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
# DB timeouts
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
# API keep-alive to avoid dropped connections under load
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
- Start the server:
prefect server start --ui
- Optional but recommended on macOS: raise file descriptor limit before starting the server:
ulimit -n 65536
Worker image/environment (where you run prefect worker start)
- Point workers to your self-hosted API:
export PREFECT_API_URL="http://<your-mac-hostname-or-ip>:4200/api"
- Smooth out polling/heartbeats to reduce synchronized spikes:
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
- Increase client retry tolerance:
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
- Optional: stagger startup in your entrypoint to avoid a thundering herd:
bash -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
Notes and tips
- Test with a smaller number of workers first (e.g., 3–5). You’ll often get the same throughput without spiking the API.
Marvin
10/09/2025, 9:22 PM
docker exec -u postgres prefect-postgres psql -c "ALTER SYSTEM SET max_connections = 300;"
docker restart prefect-postgres   # max_connections requires a restart, not just a reload
- If your workers run in another environment (e.g., Kubernetes), verify network reachability to your Mac mini and set PREFECT_API_URL accordingly.
If you share the path to your current SQLite DB and how you’re launching the server process, I can tailor the pgloader command and exact env exports to your setup.
Tom Han
10/09/2025, 9:36 PMcd .prefect
❯ ls
config.toml memo_store.toml prefect.db-shm profiles.toml storage
dlq prefect.db prefect.db-wal server.pid
│ ~/.prefect
Marvin
10/09/2025, 9:36 PM
Marvin
10/09/2025, 9:37 PM
prefect server stop || true
# If it’s still running:
kill "$(cat ~/.prefect/server.pid)" 2>/dev/null || true
2) Checkpoint WAL and create a clean backup of the SQLite DB
You have WAL files (prefect.db-wal/prefect.db-shm). To avoid corruption, checkpoint and back up:
sqlite3 "$HOME/.prefect/prefect.db" "PRAGMA wal_checkpoint(FULL);"
sqlite3 "$HOME/.prefect/prefect.db" ".backup '$HOME/.prefect/prefect.backup.db'"
3) Start a local Postgres
docker run --name prefect-postgres \
-e POSTGRES_PASSWORD=prefect \
-e POSTGRES_USER=prefect \
-e POSTGRES_DB=prefect \
-p 5432:5432 -d postgres:15
4) Install pgloader (macOS)
brew install pgloader
5) Migrate data from SQLite to Postgres
Use the backup you just created:
pgloader sqlite:///$HOME/.prefect/prefect.backup.db postgresql://prefect:prefect@localhost:5432/prefect
6) Point Prefect server at Postgres and start
- Export server env (see “Server environment” section below for the full set)
- Then start:
ulimit -n 65536
prefect server start --ui
7) Verify and finish
- Open the UI and confirm your historical runs/pools/deployments exist.
- If you ever see a migration prompt/error, run:
prefect server database upgrade
Reference: related community thread “Sqlite to Postgres database migration”
Part 2 — Fresh start on Postgres (no history)
If you don’t need history, skip pgloader:
- Start Postgres (step 3 above)
- Set the server environment (below)
- Start the server; Prefect will initialize the DB on first run.
Part 3 — Split of settings: Server vs Worker
Server environment (on your Mac mini where you run “prefect server”)
Set these before starting the server. These tune Postgres connections and harden the API under load.
# Point Prefect to Postgres with the asyncpg driver
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
# SQLAlchemy pool tuning to handle bursts (adjust as needed)
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
# DB timeouts
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
# Keep API connections alive longer to reduce disconnects under load
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Start the server:
ulimit -n 65536
prefect server start --ui
Worker image/environment (where you run “prefect worker start”)
Point workers to your self-hosted API and smooth polling/heartbeats.
# Point to your server; replace <host-or-ip> with your Mac’s reachable address
export PREFECT_API_URL="http://<host-or-ip>:4200/api"
# Reduce synchronized spikes
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
# Make client retries more tolerant during bursts
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
Optional: stagger worker startup to avoid a thundering herd
Add this to your worker entrypoint:
bash -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
Notes
- Your SQLite path is correct: $HOME/.prefect/prefect.db. Using the .backup flow above avoids WAL pitfalls.
- If you later need to reset an empty Postgres DB, you can:
- Drop/recreate the DB, or run prefect server database reset (only if you truly want to drop all tables).
Marvin
10/09/2025, 9:37 PM
Tom Han
10/09/2025, 9:39 PM
postgresql+psycopg2://prefect_user:talmolab@localhost/prefect_db — should I just create a new db within this one? prefect_db does not store Prefect itself's data
Tom Han
10/09/2025, 9:39 PM
Marvin
10/09/2025, 9:39 PM
Marvin
10/09/2025, 9:39 PM
prefect server.
- Set worker-side env vars (PREFECT_API_URL, polling/heartbeat, client retries) in your worker images or startup scripts, and consider staggering worker startup.
If that external thread contains specific constraints I missed (e.g., must keep SQLite, custom ports, or a proxy in front), paste the details here and I’ll adapt the commands exactly to it.
Tom Han
10/09/2025, 9:40 PM
postgresql+psycopg2://prefect_user:talmolab@localhost/prefect_db — should I just create a new db within this one? prefect_db does not store Prefect itself's data
Marvin
10/09/2025, 9:40 PM
Marvin
10/09/2025, 9:41 PM
# 1) Create a new DB for Prefect
psql -U postgres -h localhost -c "CREATE DATABASE prefect_orch OWNER prefect_user;"
# Optional: ensure privileges are correct
psql -U postgres -h localhost -c "GRANT ALL PRIVILEGES ON DATABASE prefect_orch TO prefect_user;"
# 2) Point Prefect Server to the new DB with the async driver
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
# 3) Initialize/upgrade the DB (idempotent)
prefect server database upgrade
# 4) Start the server
prefect server start --ui
Alternative: separate schema in your existing database
- Use if you cannot create a new database. Still isolates Prefect’s objects from your app’s schema.
- Create schema and set search_path for the Prefect role so Prefect writes to that schema.
# 1) Create a dedicated schema
psql -U postgres -h localhost -d prefect_db -c "CREATE SCHEMA IF NOT EXISTS prefect AUTHORIZATION prefect_user;"
# 2) Make Prefect role default to that schema in this DB
psql -U postgres -h localhost -d prefect_db -c "ALTER ROLE prefect_user IN DATABASE prefect_db SET search_path = prefect, public;"
# 3) Point Prefect Server at the same DB (async driver)
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_db"
# 4) Initialize/upgrade and start
prefect server database upgrade
prefect server start --ui
Notes:
- You cannot create a database “within” a database; that’s what schemas are for. Choose “new DB” or “new schema.”
- Your app can keep using psycopg2. Prefect Server should use postgresql+asyncpg. Different clients can use different drivers against the same Postgres cluster.
- If you are migrating SQLite history with pgloader, target the new database (prefect_orch) or the existing DB after creating the dedicated schema and setting search_path.
Server vs worker settings reminder
Server (on the Mac mini):
# DB target
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/<prefect_orch_or_prefect_db>"
# Pooling/timeouts for bursts
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
# API keep-alive
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Workers (where you run prefect worker start):
# Point to your self-hosted API
export PREFECT_API_URL="http://<your-mac-ip-or-hostname>:4200/api"
# Smooth polling/heartbeats
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
# Tolerant retries
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
If you want, tell me whether you prefer a new database or a new schema and I’ll tailor the exact commands to your environment (role names/ports, pg_hba settings, etc.).
Tom Han
10/09/2025, 9:50 PM
Marvin
10/09/2025, 9:50 PM
Marvin
10/09/2025, 9:51 PM
prefect server stop || true
sqlite3 "$HOME/.prefect/prefect.db" "PRAGMA wal_checkpoint(FULL);"
sqlite3 "$HOME/.prefect/prefect.db" ".backup '$HOME/.prefect/prefect.backup.db'"
2) Create a clean Postgres DB for Prefect
psql -U postgres -h localhost -c "CREATE DATABASE prefect_orch OWNER prefect_user;"
3) Initialize the Prefect schema in Postgres
- Point Prefect’s server to Postgres (async driver):
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
- Run migrations to create the schema:
prefect server database upgrade
4) Use pgloader with “data only” so it does not create tables
- Create a pgloader load file with absolute path to your backup (substitute your username/home path!):
# Save as prefect.load
LOAD DATABASE
FROM sqlite:///Users/your-username/.prefect/prefect.backup.db
INTO postgresql://prefect_user:talmolab@localhost:5432/prefect_orch
WITH data only;
- Run it:
pgloader prefect.load
Why this works: pgloader won’t attempt to issue CREATE TABLE statements (which contained SQLite-only defaults). It will just insert rows into the schema Prefect already created in Postgres.
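One extra sanity check before running the load: the alembic version stamped in the SQLite backup should match what `prefect server database upgrade` stamped into Postgres, otherwise row shapes may not line up. This helper is illustrative, and the commented commands use the example paths/credentials above:

```shell
# Hypothetical sanity check: read the schema version Prefect stamped into the
# SQLite backup, to compare against alembic_version in the target Postgres DB.
sqlite_alembic_version() {  # usage: sqlite_alembic_version <db-file>
  sqlite3 "$1" "SELECT version_num FROM alembic_version;"
}
# sqlite_alembic_version /Users/your-username/.prefect/prefect.backup.db
# psql "postgresql://prefect_user:talmolab@localhost:5432/prefect_orch" -tAc "SELECT version_num FROM alembic_version;"
```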
5) Finalize and start the server
- Run migrations again just to ensure everything matches:
prefect server database upgrade
- Start the server:
ulimit -n 65536
prefect server start --ui
- Verify your historical runs/pools/deployments appear in the UI.
If you still see pgloader errors
- Confirm your load file uses an absolute SQLite path and includes “WITH data only”.
- Make sure the Postgres DB already has the Prefect tables (prefect server database upgrade).
- If you get a specific table error, share the table name/error and I’ll give a targeted CAST or per-table load snippet.
Reminder of server vs worker settings after the migration
Server (Mac mini):
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Workers:
export PREFECT_API_URL="http://<your-mac-ip-or-hostname>:4200/api"
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
If you prefer to place Prefect into your existing prefect_db instead of a new DB, create a dedicated schema and set search_path for the prefect_user role, then follow the same steps with the connection URL pointing to that DB.
Tom Han
10/09/2025, 9:54 PM
Marvin
10/09/2025, 9:54 PM
Marvin
10/09/2025, 9:55 PM
ls -lh /Users/talmolab/.prefect/prefect.backup.db
If that prints your 1.3G file, you’re good.
2) Make sure the Prefect schema already exists in Postgres
You should have run this earlier to pre-create the tables:
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
prefect server database upgrade
3) Create a pgloader load file with the correct absolute path and “data only”
Save this as prefect.load (note the triple slash after sqlite: and the full path to Users):
LOAD DATABASE
FROM sqlite:///Users/talmolab/.prefect/prefect.backup.db
INTO postgresql://prefect_user:talmolab@localhost:5432/prefect_orch
WITH data only,
batch rows = 5000,
prefetch rows = 10000;
4) Run pgloader
pgloader prefect.load
If you still see CANTOPEN
- Verify the path is exactly /Users/talmolab/.prefect/prefect.backup.db (no tilde, no $HOME, no angle brackets).
- Check the file opens via sqlite3:
sqlite3 /Users/talmolab/.prefect/prefect.backup.db "PRAGMA integrity_check;"
- If you see -shm/-wal alongside the backup, that’s okay. You can also run:
sqlite3 /Users/talmolab/.prefect/prefect.backup.db "PRAGMA wal_checkpoint(FULL); VACUUM;"
then retry pgloader.
5) Finalize
- Ensure the schema is up to date:
prefect server database upgrade
- Start the server:
ulimit -n 65536
prefect server start --ui
You should now see your historical runs/pools/deployments in the UI. If pgloader errors on a specific table after this, paste the exact error and table name and I’ll give a targeted CAST or tweak.
Tom Han
10/09/2025, 10:00 PM
Marvin
10/09/2025, 10:01 PM
Tom Han
10/09/2025, 10:01 PM
Marvin
10/09/2025, 10:01 PM
Marvin
10/09/2025, 10:02 PM
prefect server stop || true
2) Point Prefect to the SQLite DB (use an absolute path with four slashes)
export PREFECT_SERVER_DATABASE_CONNECTION_URL="sqlite+aiosqlite:////Users/talmolab/.prefect/prefect.db"
3) (Optional but safe) Clear Postgres‑specific pool env vars so they don’t linger
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE
unset PREFECT_SERVER_DATABASE_TIMEOUT
unset PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT
4) Ensure schema is up to date
prefect server database upgrade
5) Start the server
prefect server start --ui
Option B — Use the backup you created (~/.prefect/prefect.backup.db)
If you want to roll back specifically to the backup snapshot:
1) Stop the server
prefect server stop || true
2) Swap the backup in as the active DB
cp -f "$HOME/.prefect/prefect.backup.db" "$HOME/.prefect/prefect.db"
3) Point Prefect to the SQLite DB and clear Postgres vars (same as above)
export PREFECT_SERVER_DATABASE_CONNECTION_URL="sqlite+aiosqlite:////Users/talmolab/.prefect/prefect.db"
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE
unset PREFECT_SERVER_DATABASE_TIMEOUT
unset PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT
4) Ensure schema is up to date, then start the server
prefect server database upgrade
prefect server start --ui
Notes
- The critical piece is the connection URL: use sqlite+aiosqlite and an absolute file path, e.g. sqlite+aiosqlite:////Users/talmolab/.prefect/prefect.db.
- If you had set the Postgres URL in a profile file (~/.prefect/profiles.toml), either remove it there or override it with the environment variable above.
- Workers don’t change. They will keep pointing at your same API URL (PREFECT_API_URL) and won’t care whether the server is backed by SQLite or Postgres.
- If you spun up a Postgres container for testing, you can stop/remove it after rollback:
docker stop prefect-postgres 2>/dev/null || true
docker rm prefect-postgres 2>/dev/null || true
If you want this rollback to be permanent across shells, add the SQLite connection URL export to your shell profile (e.g., ~/.zshrc) or your Prefect profile config.
Simon Hamma
10/13/2025, 7:10 AM
Tom Han
10/13/2025, 4:11 PM
Simon Hamma
10/14/2025, 6:47 AM
Tom Han
10/15/2025, 4:51 PM