# ask-marvin
t
@Marvin
Copy code
+ Exception Group Traceback (most recent call last):
...
| raise RemoteProtocolError(msg)
| httpcore.RemoteProtocolError: Server disconnected without sending a response.
|
| The above exception was the direct cause of the following exception:
|
| Traceback (most recent call last):
| File "/opt/venv/lib/python3.12/site-packages/prefect/runner/runner.py", line 982, in _cancel_run
| flow_run = await self._client.read_flow_run(flow_run)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/orchestration/_flow_runs/client.py", line 716, in read_flow_run
| response = await self.request(
| ^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/orchestration/base.py", line 53, in request
| return await self._client.send(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/base.py", line 348, in send
| response = await self._send_with_retry(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/prefect/client/base.py", line 268, in _send_with_retry
| response = await send(request, *send_args, **send_kwargs)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1629, in send
| response = await self._send_handling_auth(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1657, in _send_handling_auth
| response = await self._send_handling_redirects(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1694, in _send_handling_redirects
| response = await self._send_single_request(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_client.py", line 1730, in _send_single_request
| response = await transport.handle_async_request(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 393, in handle_async_request
| with map_httpcore_exceptions():
| File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
| self.gen.throw(value)
| File "/opt/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
| raise mapped_exc(message) from exc
| httpx.RemoteProtocolError: Server disconnected without sending a response.
+------------------------------------
An exception occurred.
When I spawn a lot of workers at the same time, I think it clogs up my self-hosted Prefect server, and that leads to my pods failing. How should I alter my self-hosted server to prevent this? I am using a Mac mini.
m
thought for 288.0 seconds
Thanks for sharing the traceback. That httpx RemoteProtocolError usually means the API process closed the connection under load (e.g., too many concurrent requests when a bunch of workers start at once). On a single self-hosted node (Mac mini), a “thundering herd” of workers can overload the API or its database, then clients see disconnects and pods fail. Here are a few pragmatic levers you can pull to harden your self-hosted server and smooth worker startup.

Questions to narrow this down
- Which Prefect version? (I’ll assume 3.x unless you say otherwise.)
- What database is your server using: SQLite or Postgres?
- Roughly how many workers start at the same time?
- How are you running the server on the Mac mini (bare process, Docker/Compose, reverse proxy)?
- Any nginx/traefik/proxy in front of the API?

Server-side hardening (recommended)
- Use Postgres (not SQLite) for concurrency
  - If you’re on SQLite, bursty connections are a common bottleneck. Switch to Postgres and raise the DB pool sizes.
- Increase DB pool capacity and timeouts
  - Prefect 3 exposes SQLAlchemy pool settings. Example environment variables:
Copy code
# Database connectivity
PREFECT_SERVER_DATABASE_CONNECTION_URL=postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
PREFECT_SERVER_DATABASE_TIMEOUT=30
PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10

# SQLAlchemy pool (increase for bursts)
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
- Make sure your Postgres has sufficient max_connections for this (a quick way to check is shown below, after the keep-alive example).
- Increase API keep-alive timeout
  - Prevents premature connection drops during bursts:
Copy code
PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
- Or via CLI:
prefect server start --keep-alive-timeout 75
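For the max_connections point above, here is a quick sanity check you can run against Postgres. This is a minimal sketch that assumes the example user/database names from the connection URL earlier; adjust them to your setup:
```
# Show the configured ceiling and how many connections are currently in use
psql -U prefect -h localhost -d prefect -c "SHOW max_connections;"
psql -U prefect -h localhost -d prefect -c "SELECT count(*) FROM pg_stat_activity;"
```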
- OS limits: raise file descriptors on the Mac mini
  - A burst of connections can hit ulimit defaults. Raise to something like 4096–65536:
Copy code
sudo launchctl limit maxfiles 65536 65536
ulimit -n 65536
- If you run a reverse proxy (nginx/traefik)
  - Increase keepalive/read timeouts, and set a small keepalive connection pool to the upstream API. For nginx:
Copy code
keepalive_timeout 75s;
proxy_read_timeout 75s;
proxy_connect_timeout 10s;
proxy_send_timeout 75s;
keepalive 64;
Worker/client-side smoothing
- Stagger worker starts to avoid a thundering herd
  - If you’re launching many pods at once, add a small random sleep before `prefect worker start`:
Copy code
sh -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
- Reduce poll frequency and spread it out
  - Workers poll the API every `query_seconds` and send heartbeats every `heartbeat_seconds`. Bump these slightly (and/or vary them per worker) to reduce synchronized spikes:
Copy code
PREFECT_WORKER_QUERY_SECONDS=15
PREFECT_WORKER_HEARTBEAT_SECONDS=30
PREFECT_WORKER_PREFETCH_SECONDS=10
- Increase client retry tolerance on transient failures
  - The Prefect client already retries, but you can make it more forgiving during bursts:
Copy code
PREFECT_CLIENT_MAX_RETRIES=10
PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
- Optional: reduce noisy API logging during scale-ups
  - If you’re sending a lot of logs to the API, consider turning it off temporarily for load tests to see if it’s contributing to pressure (see the snippet at the end of this message).

Quick “known-good” Compose-style sketch
- If you’re using Docker/Compose, here’s an example of the knobs in one place:
```
services:
  prefect-server:
    image: prefecthq/prefect:3-latest
    environment:
      PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT: "75"
      PREFECT_SERVER_DATABASE_CONNECTION_URL: "postgresql+asyncpg://prefect:prefect@postgres:5432/prefect"
      PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE: "20"
      PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW: "40"
      PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT: "60"
    command: >
      bash -lc "prefect server start --ui"
    ports:
      - "4200:4200"
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: prefect
      POSTGRES_PASSWORD: prefect
      POSTGRES_DB: prefect
    command: ["postgres", "-c", "max_connections=300"]
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: {}
```
Why this helps
- The error you shared is consistent with connection churn under load. Increasing keep-alives and DB pool capacity, plus staggering worker starts and reducing synchronized polling, usually eliminates transient disconnects.
- SQLite is often the hidden culprit for bursty workloads; Postgres + tuned pooling is much more resilient.

If you can share:
- your Prefect version,
- current DB (SQLite/Postgres),
- approximate number of workers starting at once,
- and how you run the server (bare process vs Docker/Compose, any proxy),
I can tailor exact env var values and a config snippet for your setup.
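Re the optional note above about quieting API logging during load tests: a minimal sketch of the relevant toggle. I believe the setting name below is correct for recent Prefect 3.x releases, but treat it as an assumption and confirm it with `prefect config view` before relying on it:
```
# Assumed setting name: stop shipping flow/task run logs to the API during load tests
export PREFECT_LOGGING_TO_API_ENABLED=false
```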
t
@Marvin Prefect 3.4.22, DB is sqlite, around 20 workers. It's a bare process on my mac.
m
thought for 48.8 seconds
Thanks — that helps. With Prefect 3.4.22 + SQLite + ~20 workers on a single Mac, you’re likely hitting SQLite’s concurrency limits plus a “thundering herd” of connections as workers start/poll together. That can cause the API to drop connections, which produces the httpx RemoteProtocolError you’re seeing. The most effective fix is to move your server to Postgres and slightly tune both the server and workers. If you must stay on SQLite for now, I’ve included stopgap mitigations too.

Recommended path (robust)

1) Switch server DB to Postgres
- Spin up Postgres locally (Docker example):
Copy code
docker run --name prefect-postgres \
    -e POSTGRES_PASSWORD=prefect \
    -e POSTGRES_USER=prefect \
    -e POSTGRES_DB=prefect \
    -p 5432:5432 -d postgres:15
- Point Prefect Server at Postgres and increase pool sizes/timeouts:
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75

prefect server start --ui
2) Smooth out worker load on the API
- Stagger worker startup so they don’t all connect/poll at once:
Copy code
sh -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
- Reduce poll frequency and spread heartbeats a bit:
Copy code
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
- Make the client a bit more tolerant during bursts:
Copy code
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
3) System limits on the Mac
- Raise file descriptor limits to avoid “too many open files” under burst load:
Copy code
ulimit -n 65536
If you run the server via a LaunchAgent or another supervisor, set the limits there as well.

Useful stopgaps if you must stay on SQLite
- Reduce the number of workers (e.g., 3–5) and/or stagger their starts aggressively. A small number of workers can still launch many flow runs; 20 workers is usually unnecessary and creates avoidable API/DB pressure (see the example after the timeout block below).
- Keep the worker/client tuning above (poll/heartbeat/retries).
- Increase DB timeouts a bit (applies to SQLite too):
Copy code
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
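As a concrete example of the “fewer workers” stopgap: in recent Prefect versions `prefect worker start` accepts a `--limit` flag to cap concurrent flow runs per worker (check `prefect worker start --help` on your version to confirm). A rough sketch, with the pool name as a placeholder:
```
# One of a handful of workers: small random start delay plus a per-worker concurrency cap
sh -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool> --limit 10"
```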
Note: SQLite will still be more fragile under concurrency than Postgres; Postgres is the real fix. If you’d like, I can tailor the exact env vars/commands to your launch method. Do you want a quick “SQLite patch” for now, or are you open to switching to Postgres on the Mac mini?
t
@Marvin 1. Give me code or steps on how to migrate from sqlite to postgres. 2. For all those flags, separate them into what I should do in the environment where the server is, and what I should do in the worker's image.
m
thought for 63.2 seconds
Great—here’s a clear path to move your Prefect 3.4.22 server from SQLite to Postgres, plus a clean split of server vs worker settings.

Part 1 — Migrate from SQLite to Postgres
Decide whether you want to preserve history:
- Keep history (migrate data): Use pgloader to copy your SQLite DB into Postgres.
- Fresh start (no history): Point Prefect at an empty Postgres DB and let it initialize.

A) Preserve history with pgloader
1) Back up your SQLite database
- Find your SQLite file (commonly under your PREFECT_HOME; if unsure, share your path).
- Make a copy:
Copy code
cp /path/to/your.sqlite /path/to/your.sqlite.bak
2) Start Postgres locally
- Quick local container:
Copy code
docker run --name prefect-postgres \
  -e POSTGRES_PASSWORD=prefect \
  -e POSTGRES_USER=prefect \
  -e POSTGRES_DB=prefect \
  -p 5432:5432 -d postgres:15
3) Install pgloader (on macOS)
Copy code
brew install pgloader
4) Stop Prefect server (make sure it’s not writing to the SQLite DB)
- Stop the process you used to start it.

5) Run the migration (SQLite -> Postgres)
- Replace the SQLite path with your actual db file (absolute path recommended).
Copy code
pgloader sqlite:///absolute/path/to/your.sqlite postgresql://prefect:prefect@localhost:5432/prefect
- This copies tables and data, including the alembic_version table Prefect uses for migrations.

6) Point Prefect Server to Postgres and start
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75

prefect server start --ui
- Verify your historical runs/deployments show up in the UI.

B) Fresh start on Postgres (no migration)
1) Start Postgres (same docker command as above).
2) Point Prefect Server at Postgres and start (same env/command as above).
3) Recreate work pools/queues/deployments as needed.

Reference: a community thread on this topic, “Sqlite to Postgres database migration”.

Part 2 — Split of settings: Server vs Worker

Server environment (on your Mac mini where you run prefect server)
- Database backend and pooling
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"

# SQLAlchemy pool tuning for bursts
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800

# DB timeouts
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10

# API keep-alive to avoid dropped connections under load
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
- Start the server:
Copy code
prefect server start --ui
- Optional but recommended on macOS: raise file descriptor limit before starting the server:
Copy code
ulimit -n 65536
Worker image/environment (where you run prefect worker start)
- Point workers to your self-hosted API:
Copy code
export PREFECT_API_URL="http://<your-mac-hostname-or-ip>:4200/api"
- Smooth out polling/heartbeats to reduce synchronized spikes:
Copy code
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
- Increase client retry tolerance:
Copy code
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
- Optional: stagger startup in your entrypoint to avoid a thundering herd:
Copy code
sh -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
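Putting the worker-side pieces together, here’s a minimal sketch of what an image entrypoint could look like. It only combines the settings above; the API host, pool name, and file name (`entrypoint.sh`) are placeholders, not anything Prefect requires:
```
#!/usr/bin/env bash
# entrypoint.sh — illustrative worker entrypoint (host and pool name are placeholders)
set -euo pipefail

# Point at the self-hosted API (can be overridden at deploy time)
export PREFECT_API_URL="${PREFECT_API_URL:-http://<host-or-ip>:4200/api}"

# Smooth polling/heartbeats and tolerate transient API errors
export PREFECT_WORKER_QUERY_SECONDS="${PREFECT_WORKER_QUERY_SECONDS:-15}"
export PREFECT_WORKER_HEARTBEAT_SECONDS="${PREFECT_WORKER_HEARTBEAT_SECONDS:-30}"
export PREFECT_WORKER_PREFETCH_SECONDS="${PREFECT_WORKER_PREFETCH_SECONDS:-10}"
export PREFECT_CLIENT_MAX_RETRIES="${PREFECT_CLIENT_MAX_RETRIES:-10}"
export PREFECT_CLIENT_RETRY_JITTER_FACTOR="${PREFECT_CLIENT_RETRY_JITTER_FACTOR:-0.5}"

# Stagger startup a little to avoid a thundering herd
sleep $((RANDOM % 20 + 1))

exec prefect worker start -p "<your-pool>"
```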
Notes and tips
- Test with a smaller number of workers first (e.g., 3–5). You’ll often get the same throughput without spiking the API.
- Ensure Postgres has enough connections. The default container is usually fine for this scale, but you can bump:
Copy code
docker exec -it prefect-postgres bash -lc "echo \"max_connections = 300\" >> /var/lib/postgresql/data/postgresql.conf && pg_ctl reload"
- If your workers run in another environment (e.g., Kubernetes), verify network reachability to your Mac mini and set PREFECT_API_URL accordingly.

If you share the path to your current SQLite DB and how you’re launching the server process, I can tailor the pgloader command and exact env exports to your setup.
t
@Marvin
Copy code
cd .prefect
❯ ls
config.toml     memo_store.toml prefect.db-shm  profiles.toml   storage
dlq             prefect.db      prefect.db-wal  server.pid

  │  ~/.prefect
m
thought for 58.2 seconds
Perfect — since your SQLite DB is at ~/.prefect/prefect.db, here are concrete migration steps and a clean split of server vs worker settings.

Part 1 — Migrate from SQLite to Postgres (preserve history)

1) Stop Prefect Server and ensure nothing is writing to SQLite
Copy code
prefect server stop || true
# If it’s still running:
kill "$(cat ~/.prefect/server.pid)" 2>/dev/null || true
2) Checkpoint WAL and create a clean backup of the SQLite DB
You have WAL files (prefect.db-wal/prefect.db-shm). To avoid corruption, checkpoint and back up:
Copy code
sqlite3 "$HOME/.prefect/prefect.db" "PRAGMA wal_checkpoint(FULL);"
sqlite3 "$HOME/.prefect/prefect.db" ".backup '$HOME/.prefect/prefect.backup.db'"
3) Start a local Postgres
Copy code
docker run --name prefect-postgres \
  -e POSTGRES_PASSWORD=prefect \
  -e POSTGRES_USER=prefect \
  -e POSTGRES_DB=prefect \
  -p 5432:5432 -d postgres:15
4) Install pgloader (macOS)
Copy code
brew install pgloader
5) Migrate data from SQLite to Postgres
Use the backup you just created:
Copy code
pgloader sqlite:///$HOME/.prefect/prefect.backup.db postgresql://prefect:prefect@localhost:5432/prefect
6) Point Prefect server at Postgres and start
- Export server env (see “Server environment” section below for the full set)
- Then start:
Copy code
ulimit -n 65536
prefect server start --ui
7) Verify and finish
- Open the UI and confirm your historical runs/pools/deployments exist.
- If you ever see a migration prompt/error, run:
Copy code
prefect server database upgrade
Reference: related community thread, “Sqlite to Postgres database migration”.

Part 2 — Fresh start on Postgres (no history)
If you don’t need history, skip pgloader:
- Start Postgres (step 3 above)
- Set the server environment (below)
- Start the server; Prefect will initialize the DB on first run.

Part 3 — Split of settings: Server vs Worker

Server environment (on your Mac mini where you run “prefect server”)
Set these before starting the server. These tune Postgres connections and harden the API under load.
Copy code
# Point Prefect to Postgres with the asyncpg driver
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:prefect@localhost:5432/prefect"

# SQLAlchemy pool tuning to handle bursts (adjust as needed)
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800

# DB timeouts
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10

# Keep API connections alive longer to reduce disconnects under load
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Start the server:
Copy code
ulimit -n 65536
prefect server start --ui
Worker image/environment (where you run “prefect worker start”)
Point workers to your self-hosted API and smooth polling/heartbeats.
Copy code
# Point to your server; replace <host-or-ip> with your Mac’s reachable address
export PREFECT_API_URL="http://<host-or-ip>:4200/api"

# Reduce synchronized spikes
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10

# Make client retries more tolerant during bursts
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
Optional: stagger worker startup to avoid a thundering herd
Add this to your worker entrypoint:
Copy code
sh -c "sleep $((RANDOM % 20 + 1)); prefect worker start -p <your-pool>"
Notes
- Your SQLite path is correct: $HOME/.prefect/prefect.db. Using the .backup flow above avoids WAL pitfalls.
- If you later need to reset an empty Postgres DB, you can drop/recreate the DB, or run `prefect server database reset` (only if you truly want to drop all tables).
- For 20 workers, Postgres + the tuning above should eliminate the “Server disconnected without sending a response” errors. You can also try fewer workers; throughput is often similar without the burst load on the API.

If you want, share your Mac’s IP/hostname for PREFECT_API_URL and I can provide a ready-to-paste worker env block.
t
If I already have a Postgres database (that's for actual business logic) on `postgresql+psycopg2://prefect_user:talmolab@localhost/prefect_db`, should I just create a new db within this one? prefect_db does not store Prefect's own data.
m
thought for 18.4 seconds
Thanks for the link! I can’t access Slack threads outside this workspace, but based on what we’ve discussed I believe the steps and env split I provided will cover your case.

Quick recap of the actionable items you can run now:
- Migrate your SQLite at ~/.prefect/prefect.db to Postgres with pgloader (commands included above).
- Set server-side env vars (DB URL, pool sizes, timeouts, keep-alive) on your Mac mini before starting `prefect server`.
- Set worker-side env vars (PREFECT_API_URL, polling/heartbeat, client retries) in your worker images or startup scripts, and consider staggering worker startup.

If that external thread contains specific constraints I missed (e.g., must keep SQLite, custom ports, or a proxy in front), paste the details here and I’ll adapt the commands exactly to it.
t
@Marvin If I already have a Postgres database (that's for actual business logic) on `postgresql+psycopg2://prefect_user:talmolab@localhost/prefect_db`, should I just create a new db within this one? prefect_db does not store Prefect's own data.
m
thought for 56.1 seconds
Short answer: don’t put Prefect’s tables into your app’s existing schema. Use either a separate Postgres database (preferred) or, if that’s not possible, a dedicated schema within your existing database. Also, Prefect Server should use the asyncpg driver, so use a postgresql+asyncpg URL even if your app uses psycopg2.

Options

Preferred: separate database for Prefect
- Why: clean isolation, easy lifecycle management, permissions, and no chance of table name collisions.
Steps:
Copy code
# 1) Create a new DB for Prefect
psql -U postgres -h localhost -c "CREATE DATABASE prefect_orch OWNER prefect_user;"

# Optional: ensure privileges are correct
psql -U postgres -h localhost -c "GRANT ALL PRIVILEGES ON DATABASE prefect_orch TO prefect_user;"

# 2) Point Prefect Server to the new DB with the async driver
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"

# 3) Initialize/upgrade the DB (idempotent)
prefect server database upgrade

# 4) Start the server
prefect server start --ui
Alternative: separate schema in your existing database
- Use if you cannot create a new database. Still isolates Prefect’s objects from your app’s schema.
- Create schema and set search_path for the Prefect role so Prefect writes to that schema.
Copy code
# 1) Create a dedicated schema
psql -U postgres -h localhost -d prefect_db -c "CREATE SCHEMA IF NOT EXISTS prefect AUTHORIZATION prefect_user;"

# 2) Make Prefect role default to that schema in this DB
psql -U postgres -h localhost -d prefect_db -c "ALTER ROLE prefect_user IN DATABASE prefect_db SET search_path = prefect, public;"

# 3) Point Prefect Server at the same DB (async driver)
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_db"

# 4) Initialize/upgrade and start
prefect server database upgrade
prefect server start --ui
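If you go the schema route, it’s worth verifying that the migrations actually created Prefect’s tables in the dedicated schema rather than in public. A quick check with psql, assuming the schema name `prefect` from the commands above:
```
# List Prefect's tables in the dedicated schema...
psql -U prefect_user -h localhost -d prefect_db -c "\dt prefect.*"
# ...and confirm nothing Prefect-related landed in public by accident
psql -U prefect_user -h localhost -d prefect_db -c "\dt public.*"
```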
Notes:
- You cannot create a database “within” a database; that’s what schemas are for. Choose “new DB” or “new schema.”
- Your app can keep using psycopg2. Prefect Server should use postgresql+asyncpg. Different clients can use different drivers against the same Postgres cluster.
- If you are migrating SQLite history with pgloader, target the new database (prefect_orch) or the existing DB after creating the dedicated schema and setting search_path.

Server vs worker settings reminder

Server (on the Mac mini):
Copy code
# DB target
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/<prefect_orch_or_prefect_db>"

# Pooling/timeouts for bursts
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10

# API keep-alive
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Workers (where you run prefect worker start):
Copy code
# Point to your self-hosted API
export PREFECT_API_URL="http://<your-mac-ip-or-hostname>:4200/api"

# Smooth polling/heartbeats
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10

# Tolerant retries
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
If you want, tell me whether you prefer a new database or a new schema and I’ll tailor the exact commands to your environment (role names/ports, pg_hba settings, etc.).
t
@Marvin
pgloader sqlite:///$HOME/.prefect/prefect.backup.db postgresql://prefect_user:talmolab@localhost:5432/prefect_orch
2025-10-09T17:50:11.003886-04:00 LOG pgloader version "3.6.9"
2025-10-09T17:50:11.026813-04:00 LOG Migrating from #<SQLITE-CONNECTION sqlite:///Users/talmolab/.prefect/prefect.backup.db {700B264753}>
2025-10-09T17:50:11.026887-04:00 LOG Migrating into #<PGSQL-CONNECTION pgsql://prefect_user@localhost:5432/prefect_orch {700B3AFFD3}>
2025-10-09T17:50:11.097060-04:00 ERROR Database error 22007: invalid input syntax for type timestamp with time zone: "strftime('%Y-%m-%d %H:%M:%f000', 'now')"
QUERY: CREATE TABLE flow (
  id text default '( lower(hex(randomblob(4))) || ''-'' || lower(hex(randomblob(2))) || ''-4'' || substr(lower(hex(randomblob(2))),2) || ''-'' || substr(''89ab'',abs(random()) % 4 + 1, 1) || substr(lower(hex(randomblob(2))),2) || ''-'' || lower(hex(randomblob(6))) )',
  created timestamptz default 'strftime(''%Y-%m-%d %H:%M:%f000'', ''now'')',
  updated timestamptz default 'strftime(''%Y-%m-%d %H:%M:%f000'', ''now'')',
  name text,
  tags json default '[]',
  labels json
);
2025-10-09T17:50:11.098017-04:00 FATAL Failed to create the schema, see above.
2025-10-09T17:50:11.098657-04:00 LOG report summary reset
       table name     errors       rows      bytes      total time
-----------------  ---------  ---------  ---------  --------------
            fetch          0          0                     0.000s
  fetch meta data          0        255                     0.019s
   Create Schemas          0          0                     0.001s
 Create SQL Types          0          0                     0.002s
    Create tables          0          0                     0.000s
-----------------  ---------  ---------  ---------  --------------
-----------------  ---------  ---------  ---------  --------------
m
thought for 57.0 seconds
Great catch — pgloader is trying to recreate Prefect’s tables from SQLite and is choking on SQLite-specific default expressions (e.g., strftime(...) for timestamptz). The fix: pre-create the Prefect schema in Postgres using Prefect’s own migrations, then tell pgloader to load data only (no table creation).

Do this:

1) Stop Prefect and back up SQLite
Copy code
prefect server stop || true
sqlite3 "$HOME/.prefect/prefect.db" "PRAGMA wal_checkpoint(FULL);"
sqlite3 "$HOME/.prefect/prefect.db" ".backup '$HOME/.prefect/prefect.backup.db'"
2) Create a clean Postgres DB for Prefect
Copy code
psql -U postgres -h localhost -c "CREATE DATABASE prefect_orch OWNER prefect_user;"
3) Initialize the Prefect schema in Postgres
- Point Prefect’s server to Postgres (async driver):
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
- Run migrations to create the schema:
Copy code
prefect server database upgrade
4) Use pgloader with “data only” so it does not create tables
- Create a pgloader load file with absolute path to your backup (substitute your username/home path!):
Copy code
# Save as prefect.load
LOAD DATABASE
  FROM sqlite:///Users/your-username/.prefect/prefect.backup.db
  INTO postgresql://prefect_user:talmolab@localhost:5432/prefect_orch
  WITH data only;
- Run it:
Copy code
pgloader prefect.load
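After the load finishes, a quick way to confirm rows actually landed is to count a couple of core tables. These table names come straight from Prefect’s schema; adjust the user/database if yours differ from the examples above:
```
# Spot-check that the migrated data is present in Postgres
psql -U prefect_user -h localhost -d prefect_orch -c "SELECT count(*) FROM flow_run;"
psql -U prefect_user -h localhost -d prefect_orch -c "SELECT count(*) FROM task_run;"
```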
Why this works: pgloader won’t attempt to issue CREATE TABLE statements (which contained SQLite-only defaults). It will just insert rows into the schema Prefect already created in Postgres.

5) Finalize and start the server
- Run migrations again just to ensure everything matches:
Copy code
prefect server database upgrade
- Start the server:
Copy code
ulimit -n 65536
prefect server start --ui
- Verify your historical runs/pools/deployments appear in the UI.

If you still see pgloader errors
- Confirm your load file uses an absolute SQLite path and includes “WITH data only”.
- Make sure the Postgres DB already has the Prefect tables (prefect server database upgrade).
- If you get a specific table error, share the table name/error and I’ll give a targeted CAST or per-table load snippet.

Reminder of server vs worker settings after the migration

Server (Mac mini):
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=20
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=40
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT=60
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE=1800
export PREFECT_SERVER_DATABASE_TIMEOUT=30
export PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT=10
export PREFECT_SERVER_API_KEEP_ALIVE_TIMEOUT=75
Workers:
Copy code
export PREFECT_API_URL="http://<your-mac-ip-or-hostname>:4200/api"
export PREFECT_WORKER_QUERY_SECONDS=15
export PREFECT_WORKER_HEARTBEAT_SECONDS=30
export PREFECT_WORKER_PREFETCH_SECONDS=10
export PREFECT_CLIENT_MAX_RETRIES=10
export PREFECT_CLIENT_RETRY_JITTER_FACTOR=0.5
If you prefer to place Prefect into your existing prefect_db instead of a new DB, create a dedicated schema and set search_path for the prefect_user role, then follow the same steps with the connection URL pointing to that DB.
t
@Marvin
pgloader prefect.load
2025-10-09T17:54:07.002412-04:00 LOG pgloader version "3.6.9"
2025-10-09T17:54:07.037163-04:00 LOG Migrating from #<SQLITE-CONNECTION sqlite:///User/talmolab/.prefect/prefect.backup.db {700BD19213}>
2025-10-09T17:54:07.037229-04:00 LOG Migrating into #<PGSQL-CONNECTION pgsql://prefect_user@localhost:5432/prefect_orch {700BD192A3}>
2025-10-09T17:54:07.040944-04:00 ERROR sqlite: Failed to open sqlite file #P"/User/talmolab/.prefect/prefect.backup.db": Could not open sqlite3 database /User/talmolab/.prefect/prefect.backup.db
Code CANTOPEN: no message.
2025-10-09T17:54:07.040964-04:00 LOG report summary reset
       table name     errors       rows      bytes      total time
-----------------  ---------  ---------  ---------  --------------
            fetch          0          0                     0.000s
  fetch meta data          0          0                     0.000s
-----------------  ---------  ---------  ---------  --------------
-----------------  ---------  ---------  ---------  --------------

❯ cd .prefect
❯ kls
zsh: command not found: kls
❯ ll
total 5628592
-rw-r--r--  1 talmolab staff 102B Aug 20 18:57 config.toml
drwxr-xr-x 16 talmolab staff 512B Oct 9 08:59 dlq
-rw-------  1 talmolab staff  93B Aug 27 16:41 memo_store.toml
-rw-r--r--  1 talmolab staff 1.3G Oct 9 14:49 prefect.backup.db
-rw-r--r--  1 talmolab staff  32K Oct 9 14:50 prefect.backup.db-shm
-rw-r--r--  1 talmolab staff   0B Oct 9 14:50 prefect.backup.db-wal
m
thought for 47.7 seconds
pgloader can’t find your SQLite file because the path in the log is wrong: it shows sqlite:///User/talmolab/... (missing the “s” in Users). Let’s fix that and rerun with a load file that uses “data only”.

Do this:

1) Sanity check the absolute path
Copy code
ls -lh /Users/talmolab/.prefect/prefect.backup.db
If that prints your 1.3G file, you’re good.

2) Make sure the Prefect schema already exists in Postgres
You should have run this earlier to pre-create the tables:
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect_user:talmolab@localhost:5432/prefect_orch"
prefect server database upgrade
3) Create a pgloader load file with the correct absolute path and “data only”
Save this as prefect.load (note the triple slash after sqlite: and the full path to Users):
Copy code
LOAD DATABASE
  FROM sqlite:///Users/talmolab/.prefect/prefect.backup.db
  INTO postgresql://prefect_user:talmolab@localhost:5432/prefect_orch
  WITH data only,
       batch rows = 5000,
       prefetch rows = 10000;
4) Run pgloader
Copy code
pgloader prefect.load
If you still see CANTOPEN
- Verify the path is exactly /Users/talmolab/.prefect/prefect.backup.db (no tilde, no $HOME, no angle brackets).
- Check the file opens via sqlite3:
Copy code
sqlite3 /Users/talmolab/.prefect/prefect.backup.db "PRAGMA integrity_check;"
- If you see -shm/-wal alongside the backup, that’s okay. You can also run:
Copy code
sqlite3 /Users/talmolab/.prefect/prefect.backup.db "PRAGMA wal_checkpoint(FULL); VACUUM;"
then retry pgloader.

5) Finalize
- Ensure the schema is up to date:
Copy code
prefect server database upgrade
- Start the server:
Copy code
ulimit -n 65536
prefect server start --ui
You should now see your historical runs/pools/deployments in the UI. If pgloader errors on a specific table after this, paste the exact error and table name and I’ll give a targeted CAST or tweak.
t
@Marvin There's a ton of warnings and errors and I still don't have any data.
2025-10-09T17:55:53.166156-04:00 ERROR Database error 23505: duplicate key value violates unique constraint "alembic_version_pkc"
DETAIL: Key (version_num)=(9e83011d1f2a) already exists.
CONTEXT: COPY alembic_version, line 1
2025-10-09T17:55:56.563149-04:00 ERROR Database error 22P02: invalid input syntax for type json
DETAIL: Token "98.6d0" is invalid.
CONTEXT: JSON data, line 1: 98.6d0
COPY artifact, line 362, column data: "98.6d0"
2025-10-09T17:55:56.623526-04:00 ERROR Database error 22P02: invalid input syntax for type json
DETAIL: Token "99.5d0" is invalid.
CONTEXT: JSON data, line 1: 99.5d0
COPY artifact_collection, line 70, column data: "99.5d0"
2025-10-09T17:56:02.012427-04:00 ERROR Database error 22007: invalid input syntax for type interval: "1970-01-01 000000.051627"
CONTEXT: COPY flow_run, line 1, column total_run_time: "1970-01-01 000000.051627"
2025-10-09T17:56:13.659656-04:00 ERROR Database error 22007: invalid input syntax for type interval: "1970-01-01 000000.000000"
CONTEXT: COPY task_run, line 1, column total_run_time: "1970-01-01 000000.000000"
2025-10-09T17:56:15.931751-04:00 ERROR PostgreSQL Database error 23503: insert or update on table "flow_run_state" violates foreign key constraint "fk_flow_run_state__flow_run_id__flow_run"
DETAIL: Key (flow_run_id)=(4cd99e02-71f0-49fe-9326-b88e5dad22ba) is not present in table "flow_run".
QUERY: ALTER TABLE "public"."flow_run_state" ADD CONSTRAINT "fk_flow_run_state__flow_run_id__flow_run" FOREIGN KEY (flow_run_id) REFERENCES flow_run(id) ON DELETE CASCADE
2025-10-09T17:56:15.935046-04:00 ERROR PostgreSQL Database error 23503: insert or update on table "flow_run_state" violates foreign key constraint "fk_flow_run_state__result_artifact_id__artifact"
DETAIL: Key (result_artifact_id)=(06294852-daca-4951-82ea-7b13eec748ef) is not present in table "artifact".
QUERY: ALTER TABLE "public"."flow_run_state" ADD CONSTRAINT "fk_flow_run_state__result_artifact_id__artifact" FOREIGN KEY (result_artifact_id) REFERENCES artifact(id) ON DELETE SET NULL
2025-10-09T17:56:15.938825-04:00 ERROR PostgreSQL Database error 23503: insert or update on table "task_run_state" violates foreign key constraint "fk_task_run_state__task_run_id__task_run"
DETAIL: Key (task_run_id)=(0198ce07-d4d5-7e18-ba79-88df356bb148) is not present in table "task_run".
QUERY: ALTER TABLE "public"."task_run_state" ADD CONSTRAINT "fk_task_run_state__task_run_id__task_run" FOREIGN KEY (task_run_id) REFERENCES task_run(id) ON DELETE CASCADE
2025-10-09T17:56:15.957289-04:00 ERROR PostgreSQL Database error 23503: insert or update on table "task_run_state" violates foreign key constraint "fk_task_run_state__result_artifact_id__artifact"
DETAIL: Key (result_artifact_id)=(e3ffd75a-ee63-4e35-a46f-7e8908b25b0d) is not present in table "artifact".
QUERY: ALTER TABLE "public"."task_run_state" ADD CONSTRAINT "fk_task_run_state__result_artifact_id__artifact" FOREIGN KEY (result_artifact_id) REFERENCES artifact(id) ON DELETE SET NULL
2025-10-09T17:56:16.085945-04:00 LOG report summary reset
                    table name     errors       rows      bytes      total time
------------------------------  ---------  ---------  ---------  --------------
                         fetch          0          0                     0.000s
               fetch meta data          0         69                     0.015s
             Drop Foreign Keys          0         66                     0.014s
------------------------------  ---------  ---------  ---------  --------------
               alembic_version          1          0                     0.013s
             concurrency_limit          0          0                     0.008s
          task_run_state_cache          0          0                     0.013s
                 configuration          0          2     0.4 kB          0.020s
        block_schema_reference          0         27     4.7 kB          0.048s
                block_document          0          6     2.6 kB          0.067s
                        worker          0        405    80.7 kB          0.102s
                task_run_state          0     241820   160.5 MB          8.555s
                          flow          0         20     2.3 kB          0.002s
                  saved_search          0          0                     0.008s
                         agent          0          0                     0.015s
                  block_schema          0         88   188.3 kB          0.057s
      block_document_reference          0          1     0.2 kB          0.064s
                    block_type          0         58    47.1 kB          0.111s
                flow_run_state          0      44884    37.7 MB          3.243s
                     work_pool          0          5    14.8 kB          2.366s
                      artifact          1          0                     2.478s
           artifact_collection          1          0                     2.465s
                           log          0     156098    41.7 MB          6.872s
                    csrf_token          0          0                     3.423s
   automation_related_resource          0          1     0.2 kB          3.428s
             automation_bucket          0          6     6.8 kB          3.434s
                        events          0     174510   186.9 MB         15.429s
                    work_queue          0          5     1.0 kB          3.678s
                      flow_run          1          0                     3.896s
          concurrency_limit_v2          0          0                     3.574s
                flow_run_input          0          0                     3.579s
                    automation          0          3     3.0 kB          3.587s
composite_trigger_child_firing          0          0                     3.592s
               event_resources          0     892173   201.6 MB         17.245s
     automation_event_follower          0          0                     7.124s
           deployment_schedule          0          1     0.2 kB          7.141s
                      task_run          1          0                     7.701s
                      variable          0          0                     0.003s
                    deployment          0         10     9.7 kB          0.013s
            deployment_version          0          0                     0.006s
------------------------------  ---------  ---------  ---------  --------------
       COPY Threads Completion          0          4                    22.538s
               Reset Sequences          0          0                     0.059s
           Create Foreign Keys          4         29                     0.303s
              Install Comments          0          0                     0.000s
------------------------------  ---------  ---------  ---------  --------------
             Total import time          5    1510123   628.7 MB         22.901s
m
Your message was too long, here's your message at the allowed limit: There's a ton of warnings and errors and I still don't have any data […truncated echo of the message above…]
t
@Marvin How do I roll back to sqlite?
m
thought for 44.3 seconds
Totally doable. To roll back to SQLite, just point the Prefect server back at your local SQLite file and restart. Your workers don’t need any changes.

Option A — Use your original SQLite file (~/.prefect/prefect.db)

1) Stop the server
Copy code
prefect server stop || true
2) Point Prefect to the SQLite DB (use an absolute path with four slashes)
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="sqlite+aiosqlite:////Users/talmolab/.prefect/prefect.db"
3) (Optional but safe) Clear Postgres‑specific pool env vars so they don’t linger
Copy code
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE
unset PREFECT_SERVER_DATABASE_TIMEOUT
unset PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT
4) Ensure schema is up to date
Copy code
prefect server database upgrade
5) Start the server
Copy code
prefect server start --ui
Option B — Use the backup you created (~/.prefect/prefect.backup.db)
If you want to roll back specifically to the backup snapshot:

1) Stop the server
Copy code
prefect server stop || true
2) Swap the backup in as the active DB
Copy code
cp -f "$HOME/.prefect/prefect.backup.db" "$HOME/.prefect/prefect.db"
3) Point Prefect to the SQLite DB and clear Postgres vars (same as above)
Copy code
export PREFECT_SERVER_DATABASE_CONNECTION_URL="sqlite+aiosqlite:////Users/talmolab/.prefect/prefect.db"

unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT
unset PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE
unset PREFECT_SERVER_DATABASE_TIMEOUT
unset PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT
4) Ensure schema is up to date, then start the server
Copy code
prefect server database upgrade
prefect server start --ui
Notes
- The critical piece is the connection URL: use sqlite+aiosqlite and an absolute file path, e.g. `sqlite+aiosqlite:////Users/talmolab/.prefect/prefect.db`.
- If you had set the Postgres URL in a profile file (~/.prefect/profiles.toml), either remove it there or override it with the environment variable above (a quick way to check and clear it is shown after the docker commands below).
- Workers don’t change. They will keep pointing at your same API URL (`PREFECT_API_URL`) and won’t care whether the server is backed by SQLite or Postgres.
- If you spun up a Postgres container for testing, you can stop/remove it after rollback:
Copy code
docker stop prefect-postgres 2>/dev/null || true
docker rm prefect-postgres 2>/dev/null || true
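For the profile check mentioned above, here’s a minimal sketch using the Prefect CLI. The `prefect config view` and `prefect config unset` commands are standard, but double-check the flags with `--help` on your version:
```
# See which database settings are active and whether they come from env vars or a profile
prefect config view --show-sources | grep -i database

# If the Postgres URL is stored in the active profile, clear it there
prefect config unset PREFECT_SERVER_DATABASE_CONNECTION_URL
```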
If you want this rollback to be permanent across shells, add the SQLite connection URL export to your shell profile (e.g., ~/.zshrc) or your Prefect profile config.
s
@Tom Han, did you manage to get Prefect to write to a different schema?
t
@Simon Hamma Hi! The migration Marvin suggested corrupted my database, so I just started a new database with no migration on Postgres. Perhaps ask one of the Prefect devs?
s
@Tom Han Oh shit, for me the suggested solution of creating a new user and assigning a default schema worked. But I also started from scratch with Prefect so there was nothing to corrupt I guess.
t
@Simon Hamma Yeah, I think it really depends on your use case. I had some occasions where a lot of parallel requests froze SQLite, so Postgres was needed for my use case.