Dennis Hinnenkamp
09/23/2025, 1:37 PM

Marvin
09/23/2025, 1:37 PM

Marvin
09/23/2025, 1:40 PM
PREFECT_API_DATABASE_CONNECTION_URL
- Example: postgresql+asyncpg://<user>:<password>@<host>:5432/<db>
- Migrations on startup: PREFECT_SERVER_DATABASE_MIGRATE_ON_START controls this and defaults to true. If it was disabled, or the DB was unreachable at startup, tables won't be created. You can confirm what the pod actually resolves with the snippet below.
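A quick way to check which database settings the API pod actually resolves (a sketch; assumes the official Prefect image, where the prefect CLI is on PATH, and uses placeholder names):
# Show effective Prefect settings and where each value comes from
kubectl exec -it <api-pod-name> -n <namespace> -- \
  prefect config view --show-sources | grep -i database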
Manually run migrations (Kubernetes)
Option A — exec into your API pod and run the CLI
1) Find your API pod
kubectl get pods -n <namespace>
2) Exec and run the upgrade
kubectl exec -it <api-pod-name> -n <namespace> -- prefect server database upgrade -y
- To preview without changes:
kubectl exec -it <api-pod-name> -n <namespace> -- prefect server database upgrade --dry-run
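Before running the upgrade, it can also help to confirm the CLI inside the pod is the Prefect version you expect (it is the same install the server runs on):
kubectl exec -it <api-pod-name> -n <namespace> -- prefect version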
Option B — run a one-off Job (use the same image and env as your API)
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-upgrade
  namespace: <namespace>
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-upgrade
          image: <your-prefect-server-image>  # match your API image tag
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
            - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
              value: "true"
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
Apply:
kubectl apply -f prefect-db-upgrade.yaml
kubectl logs job/prefect-db-upgrade -n <namespace>
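If you script this, you can block until the migration Job finishes and surface failures (same placeholders as above):
# Wait for the Job to complete before rolling the API
kubectl wait --for=condition=complete job/prefect-db-upgrade -n <namespace> --timeout=300s
# On failure, inspect events and pod status
kubectl describe job/prefect-db-upgrade -n <namespace>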
If tables are still missing
- Check API pod logs for migration errors (permissions, network, bad URL); you can also verify the schema directly in Postgres, as sketched after this list:
kubectl logs <api-pod-name> -n <namespace>
- As a last resort (destructive: drops all Prefect tables), you can reset:
kubectl exec -it <api-pod-name> -n <namespace> -- prefect server database reset -y
Only do this if you are okay losing Prefect metadata.
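To check the schema directly, a sketch using a throwaway psql pod. Note that psql wants the plain postgresql:// scheme, not the postgresql+asyncpg:// form the server uses (image tag and names are placeholders):
kubectl run psql-check --rm -it --restart=Never -n <namespace> --image=postgres:16 -- \
  psql "postgresql://<user>:<password>@<host>:5432/<db>" -c '\dt'
If \dt shows no Prefect tables (flow_run, deployment, etc.), migrations never ran against this database.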
Docs and references
- Server CLI (includes database commands): Server CLI
- Helm (general guidance): Prefect Server Helm
- Settings reference (env vars): Settings reference
A couple of questions to pinpoint the issue
- Which database are you using (Postgres version)? Is the DB externally managed?
- How did you deploy the server (Helm, custom manifests)? Where are you setting the DB URL?
- Do you see any specific errors in the API pod logs (e.g., permission denied, cannot connect, alembic errors)?
Share any error snippets and I’ll tailor the next steps.

Dennis Hinnenkamp
09/23/2025, 1:44 PM

Marvin
09/23/2025, 1:44 PM

Marvin
09/23/2025, 1:45 PM
PREFECT_API_DATABASE_CONNECTION_URL
- Migrate-on-start is controlled by PREFECT_SERVER_DATABASE_MIGRATE_ON_START (defaults to true)
What to do in ArgoCD
1) Ensure the API has the correct DB URL and migration-on-start enabled
- In your Deployment/StatefulSet manifest (or Helm values), set:
env:
  - name: PREFECT_API_DATABASE_CONNECTION_URL
    valueFrom:
      secretKeyRef:
        name: <your-db-secret>
        key: DATABASE_URL
  - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
    value: "true"
- Commit this change and let ArgoCD sync. If the server started previously without a valid DB URL or with migrations disabled, just correcting this and rolling the API will typically create the missing tables.
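If the referenced Secret doesn't exist yet, a minimal sketch for generating its manifest (same <your-db-secret> name and DATABASE_URL key as above; in a GitOps repo you'd normally run the result through SealedSecrets/SOPS rather than committing the plaintext):
kubectl create secret generic <your-db-secret> -n <namespace> \
  --from-literal=DATABASE_URL='postgresql+asyncpg://<user>:<password>@<host>:5432/<db>' \
  --dry-run=client -o yaml > prefect-db-secret.yaml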
2) Add a PreSync migration Job (runs before each deploy)
- Add this Job to your repo. ArgoCD will run it on each sync before updating the API and clean it up after success.
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-upgrade
  namespace: <namespace>
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-upgrade
          image: <your-prefect-server-image>  # match your API image tag
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
- Commit and let ArgoCD sync. In the ArgoCD UI, you can view the Job pod logs to confirm migrations ran.
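To confirm the hook actually ran from the command line, assuming you use the argocd CLI (hook pods are deleted once they succeed, so check while the sync is in flight, or relax the delete policy while debugging):
argocd app sync <your-app-name>
kubectl logs job/prefect-db-upgrade -n <namespace>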
3) Or, use an initContainer on the API (runs on each pod start)
- This guarantees migrations run before the API starts (good for fully GitOps flows).
- Add to your API Deployment/StatefulSet:
spec:
  template:
    spec:
      initContainers:
        - name: db-migrate
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
      containers:
        - name: api
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
            - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
              value: "true"
- Commit and sync. The API won’t start until migrations succeed.
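After the sync, you can confirm the API only became ready once migrations finished; a sketch assuming your API is a Deployment named prefect-server (adjust to your manifests):
kubectl rollout status deployment/prefect-server -n <namespace>
# The initContainer's migration output:
kubectl logs deploy/prefect-server -n <namespace> -c db-migrate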
Notes and tips
- Use the async Postgres driver in your DB URL: postgresql+asyncpg://user:pass@host:5432/dbname
- If you previously deployed with a bad DB URL, a redeploy with the corrected envs (or the initContainer/PreSync Job) will create any missing tables.
- Avoid using prefect server database reset -y unless you’re okay dropping all Prefect metadata.
References
- Server CLI (database commands): Server CLI
- Settings reference (env vars): Settings reference
- Server Helm (if you’re using Helm via ArgoCD): Prefect Server Helm
If you can share whether you’re using Helm or raw manifests in ArgoCD (and what Prefect image tag you pin), I can give you an exact patch snippet for your setup.

Dennis Hinnenkamp
09/23/2025, 1:50 PM
2025-09-23 13:34:59.557 GMT [452] ERROR: relation "deployment" does not exist at character 28
2025-09-23 13:34:59.557 GMT [452] STATEMENT: SELECT deployment.id
FROM deployment
WHERE deployment.paused IS NOT true AND deployment.updated >= $1::TIMESTAMP WITH TIME ZONE AND (EXISTS (SELECT deployment_schedule.deployment_id
FROM deployment_schedule
WHERE deployment_schedule.deployment_id = deployment.id AND deployment_schedule.active IS true)) ORDER BY deployment.id
LIMIT $2::INTEGER
2025-09-23 13:35:01.278 GMT [452] ERROR: relation "automation_bucket" does not exist at character 13
2025-09-23 13:35:01.278 GMT [452] STATEMENT: DELETE FROM automation_bucket WHERE automation_bucket."end" <= $1::TIMESTAMP WITH TIME ZONE
2025-09-23 13:35:03.121 GMT [452] ERROR: relation "flow_run" does not exist at character 748
2025-09-23 13:35:03.121 GMT [452] STATEMENT: SELECT flow_run.flow_id, flow_run.deployment_id, flow_run.work_queue_name, flow_run.flow_version, flow_run.deployment_version, flow_run.parameters, flow_run.idempotency_key, flow_run.context, flow_run.empirical_policy, flow_run.tags, flow_run.labels, flow_run.created_by, flow_run.infrastructure_pid, flow_run.job_variables, flow_run.infrastructure_document_id, flow_run.parent_task_run_id, flow_run.auto_scheduled, flow_run.state_id, flow_run.work_queue_id, flow_run.name, flow_run.state_type, flow_run.state_name, flow_run.state_timestamp, flow_run.run_count, flow_run.expected_start_time, flow_run.next_scheduled_start_time, flow_run.start_time, flow_run.end_time, flow_run.total_run_time, flow_run.id, flow_run.created, flow_run.updated
FROM flow_run
WHERE flow_run.state_type = $1::state_type AND flow_run.end_time IS NOT NULL AND flow_run.end_time >= $2::TIMESTAMP WITH TIME ZONE AND flow_run.id > $3::UUID ORDER BY flow_run.id
LIMIT $4::INTEGER
2025-09-23 13:35:03.647 GMT [452] ERROR: relation "flow_run" does not exist at character 62
2025-09-23 13:35:03.647 GMT [452] STATEMENT: SELECT flow_run.id, flow_run.next_scheduled_start_time
FROM flow_run
WHERE flow_run.next_scheduled_start_time <= $1::TIMESTAMP WITH TIME ZONE AND flow_run.state_type = $2::state_type AND flow_run.state_name = $3::VARCHAR
LIMIT $4::INTEGER
2025-09-23 13:35:03.647 GMT [453] ERROR: relation "flow_run" does not exist at character 748
2025-09-23 13:35:03.647 GMT [453] STATEMENT: SELECT flow_run.flow_id, flow_run.deployment_id, flow_run.work_queue_name, flow_run.flow_version, flow_run.deployment_version, flow_run.parameters, flow_run.idempotency_key, flow_run.context, flow_run.empirical_policy, flow_run.tags, flow_run.labels, flow_run.created_by, flow_run.infrastructure_pid, flow_run.job_variables, flow_run.infrastructure_document_id, flow_run.parent_task_run_id, flow_run.auto_scheduled, flow_run.state_id, flow_run.work_queue_id, flow_run.name, flow_run.state_type, flow_run.state_name, flow_run.state_timestamp, flow_run.run_count, flow_run.expected_start_time, flow_run.next_scheduled_start_time, flow_run.start_time, flow_run.end_time, flow_run.total_run_time, flow_run.id, flow_run.created, flow_run.updated
FROM flow_run
WHERE flow_run.state_type = $1::state_type
LIMIT $2::INTEGER
2025-09-23 13:35:04.557 GMT [453] ERROR: relation "deployment" does not exist at character 28
2025-09-23 13:35:04.557 GMT [453] STATEMENT: SELECT deployment.id
FROM deployment
WHERE deployment.paused IS NOT true AND deployment.updated >= $1::TIMESTAMP WITH TIME ZONE AND (EXISTS (SELECT deployment_schedule.deployment_id
FROM deployment_schedule
WHERE deployment_schedule.deployment_id = deployment.id AND deployment_schedule.active IS true)) ORDER BY deployment.id
LIMIT $2::INTEGER
2025-09-23 13:35:06.286 GMT [453] ERROR: relation "automation_bucket" does not exist at character 13
2025-09-23 13:35:06.286 GMT [453] STATEMENT: DELETE FROM automation_bucket WHERE automation_bucket."end" <= $1::TIMESTAMP WITH TIME ZONE
Marvin
09/23/2025, 1:50 PM

Marvin
09/23/2025, 1:52 PM
PREFECT_API_DATABASE_CONNECTION_URL
- Example: postgresql+asyncpg://user:pass@host:5432/dbname
- Migrate-on-start can help, but do not rely on it alone in GitOps; run an explicit migration step as shown below:
- Env var: PREFECT_SERVER_DATABASE_MIGRATE_ON_START (defaults to true)
ArgoCD options
Option A — PreSync hook Job (recommended)
- This runs migrations before each sync, then is cleaned up on success.
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-upgrade
  namespace: <namespace>
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-upgrade
          image: <your-prefect-server-image>  # match the API image tag
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
- Commit this manifest alongside your app and let ArgoCD sync. In the Argo UI, open the Job pod logs to confirm it applied all migrations.
Option B — API initContainer (ensures DB is migrated before API starts)
spec:
  template:
    spec:
      initContainers:
        - name: db-migrate
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
      containers:
        - name: api
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
            - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
              value: "true"
Common gotchas to check
- Use the async driver in the URL: postgresql+asyncpg://...
- The DB user needs privileges to CREATE/ALTER tables and create indexes.
- Point to the correct database (and default schema): a typo can silently point the API at a different, empty database, so everything looks uninitialized. See the check after this list.
- Keep the migration Job/initContainer image tag in lockstep with the API image tag to avoid version drift.
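One direct way to tell whether migrations have ever run against the database you think you're using: Prefect's schema migrations are managed with Alembic, which records the current revision in an alembic_version table (assuming Prefect keeps Alembic's default version-table name, which it does as far as I know). Note the plain postgresql:// scheme for psql:
psql "postgresql://<user>:<password>@<host>:5432/<db>" -c 'SELECT version_num FROM alembic_version;'
An error that the relation does not exist means no migration has ever been applied there.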
Destructive options (only if you’re okay losing Prefect metadata)
- If you’ve ended up with a bad/partially initialized schema, you can add a one-time ArgoCD hook Job to run:
prefect server database reset -y
Then run the upgrade Job. Only do this if you intend to wipe Prefect’s tables.
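A minimal sketch of that one-time hook, if you accept the data loss (the Job name prefect-db-reset is my placeholder; commit it for a single sync, then delete it from the repo so it never re-runs):
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-reset
  namespace: <namespace>
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-reset
          image: <your-prefect-server-image>  # same tag as the API
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "reset", "-y"]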
References
- Server CLI (database commands): Server CLI
- Settings reference (env vars): Settings reference
- Helm (if you’re using Helm via ArgoCD): Prefect Server Helm
If you can share whether your ArgoCD app uses Helm or raw manifests (and the Prefect image tag you pin), I can give you an exact patch targeting your files.