# ask-community
d
@Marvin I deployed the self hosted server version on kubernetes and my database seems to have some trouble. How can I fix that? Seems like there are some tables missing.
m
thought for 184.5 seconds
Thanks for the details — missing tables usually means the DB migrations didn’t run (or ran against the wrong database). Assuming you’re on Prefect 3.x, here’s how to fix and double-check.

Quick checks
- Confirm the API pod has a valid DB URL:
  - Env var: PREFECT_API_DATABASE_CONNECTION_URL
  - Example: postgresql+asyncpg://<user>:<password>@<host>:5432/<db>
- Migrations on startup:
  - PREFECT_SERVER_DATABASE_MIGRATE_ON_START controls this and defaults to true. If this was disabled or the DB was unreachable at startup, tables won’t be created.

Manually run migrations (Kubernetes)

Option A — exec into your API pod and run the CLI
1) Find your API pod
kubectl get pods -n <namespace>
2) Exec and run the upgrade
kubectl exec -it <api-pod-name> -n <namespace> -- prefect server database upgrade -y
- To preview without changes:
kubectl exec -it <api-pod-name> -n <namespace> -- prefect server database upgrade --dry-run
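Before running the upgrade, it can also be worth confirming the pod actually sees the connection URL you expect (pod and namespace names are placeholders, as above):

```shell
# Print the DB URL the API container sees; an empty result means the env var
# isn't set on this container
kubectl exec <api-pod-name> -n <namespace> -- \
  sh -c 'printenv PREFECT_API_DATABASE_CONNECTION_URL'
```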
Option B — run a one-off Job (use the same image and env as your API)
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-upgrade
  namespace: <namespace>
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-upgrade
          image: <your-prefect-server-image>  # match your API image tag
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
            - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
              value: "true"
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
Apply:
kubectl apply -f prefect-db-upgrade.yaml
kubectl logs job/prefect-db-upgrade -n <namespace>
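If you want the apply step to block until the migration finishes before you check logs, `kubectl wait` can gate it (same placeholder namespace):

```shell
# Block until the Job reports completion, or time out after 5 minutes
kubectl wait --for=condition=complete job/prefect-db-upgrade \
  -n <namespace> --timeout=300s
```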
If tables are still missing
- Check API pod logs for migration errors (permissions, network, bad URL):
kubectl logs <api-pod-name> -n <namespace>
- As a last resort (destructive: drops all Prefect tables), you can reset:
kubectl exec -it <api-pod-name> -n <namespace> -- prefect server database reset -y
Only do this if you are okay losing Prefect metadata.

Docs and references
- Server CLI (includes database commands): Server CLI
- Helm (general guidance): Prefect Server Helm
- Settings reference (env vars): Settings reference

A couple of questions to pinpoint the issue
- Which database are you using (Postgres version)? Is the DB externally managed?
- How did you deploy the server (Helm, custom manifests)? Where are you setting the DB URL?
- Do you see any specific errors in the API pod logs (e.g., permission denied, cannot connect, alembic errors)?

Share any error snippets and I’ll tailor the next steps.
d
@Marvin I am using argocd for deployments and do not have access to the cluster or pods via kubectl
m
thought for 73.7 seconds
Got it — since you’re deploying via ArgoCD and don’t have kubectl access, the easiest fix is to make migrations part of your GitOps workflow so ArgoCD runs them for you.

Assumptions
- Prefect 3.x
- Your DB URL is set via env var and reachable from the API pod
- The correct DB env var is PREFECT_API_DATABASE_CONNECTION_URL
- Migrate-on-start is controlled by PREFECT_SERVER_DATABASE_MIGRATE_ON_START (defaults to true)

What to do in ArgoCD

1) Ensure the API has the correct DB URL and migration-on-start enabled
- In your Deployment/StatefulSet manifest (or Helm values), set:
env:
  - name: PREFECT_API_DATABASE_CONNECTION_URL
    valueFrom:
      secretKeyRef:
        name: <your-db-secret>
        key: DATABASE_URL
  - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
    value: "true"
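For completeness, the Secret these snippets reference would look roughly like this (name, key, and URL are placeholders; in a GitOps repo you’d more likely manage it via SealedSecrets or ExternalSecrets than commit it in plain text):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <your-db-secret>
  namespace: <namespace>
type: Opaque
stringData:
  # Must use the async driver so Prefect Server can connect
  DATABASE_URL: postgresql+asyncpg://<user>:<password>@<host>:5432/<db>
```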
- Commit this change and let ArgoCD sync. If the server started previously without a valid DB URL or with migrations disabled, just correcting this and rolling the API will typically create the missing tables.

2) Add a PreSync migration Job (runs before each deploy)
- Add this Job to your repo. ArgoCD will run it on each sync before updating the API and clean it up after success.
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-upgrade
  namespace: <namespace>
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-upgrade
          image: <your-prefect-server-image> # match your API image tag
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
- Commit and let ArgoCD sync. In the ArgoCD UI, you can view the Job pod logs to confirm migrations ran.

3) Or, use an initContainer on the API (runs on each pod start)
- This guarantees migrations run before the API starts (good for fully GitOps flows).
- Add to your API Deployment/StatefulSet:
spec:
  template:
    spec:
      initContainers:
        - name: db-migrate
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
      containers:
        - name: api
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
            - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
              value: "true"
- Commit and sync. The API won’t start until migrations succeed.

Notes and tips
- Use the async Postgres driver in your DB URL: postgresql+asyncpg://user:pass@host:5432/dbname
- If you previously deployed with a bad DB URL, a redeploy with the corrected envs (or the initContainer/PreSync Job) will create any missing tables.
- Avoid using prefect server database reset -y unless you’re okay dropping all Prefect metadata.

References
- Server CLI (database commands): Server CLI
- Settings reference (env vars): Settings reference
- Server Helm (if you’re using Helm via ArgoCD): Prefect Server Helm

If you can share whether you’re using Helm or raw manifests in ArgoCD (and what Prefect image tag you pin), I can give you an exact patch snippet for your setup.
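As a quick local sanity check of a URL you’re about to commit, a small shell sketch (the connection string shown is a made-up example):

```shell
# Hypothetical connection string; verify it uses the async driver scheme
DB_URL="postgresql+asyncpg://prefect:secret@db.internal:5432/prefect"
case "$DB_URL" in
  postgresql+asyncpg://*) echo "driver OK" ;;
  *) echo "expected postgresql+asyncpg:// scheme" ;;
esac
```

A plain `postgresql://` or `postgres://` scheme will make the server fall back to a driver Prefect’s async engine can’t use, so catching this before a sync saves a failed rollout.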
d
@Marvin
2025-09-23 13:34:59.557 GMT [452] ERROR:  relation "deployment" does not exist at character 28
2025-09-23 13:34:59.557 GMT [452] STATEMENT:  SELECT deployment.id
	FROM deployment
	WHERE deployment.paused IS NOT true AND deployment.updated >= $1::TIMESTAMP WITH TIME ZONE AND (EXISTS (SELECT deployment_schedule.deployment_id
	FROM deployment_schedule
	WHERE deployment_schedule.deployment_id = deployment.id AND deployment_schedule.active IS true)) ORDER BY deployment.id
	 LIMIT $2::INTEGER
2025-09-23 13:35:01.278 GMT [452] ERROR:  relation "automation_bucket" does not exist at character 13
2025-09-23 13:35:01.278 GMT [452] STATEMENT:  DELETE FROM automation_bucket WHERE automation_bucket."end" <= $1::TIMESTAMP WITH TIME ZONE
2025-09-23 13:35:03.121 GMT [452] ERROR:  relation "flow_run" does not exist at character 748
2025-09-23 13:35:03.121 GMT [452] STATEMENT:  SELECT flow_run.flow_id, flow_run.deployment_id, flow_run.work_queue_name, flow_run.flow_version, flow_run.deployment_version, flow_run.parameters, flow_run.idempotency_key, flow_run.context, flow_run.empirical_policy, flow_run.tags, flow_run.labels, flow_run.created_by, flow_run.infrastructure_pid, flow_run.job_variables, flow_run.infrastructure_document_id, flow_run.parent_task_run_id, flow_run.auto_scheduled, flow_run.state_id, flow_run.work_queue_id, flow_run.name, flow_run.state_type, flow_run.state_name, flow_run.state_timestamp, flow_run.run_count, flow_run.expected_start_time, flow_run.next_scheduled_start_time, flow_run.start_time, flow_run.end_time, flow_run.total_run_time, flow_run.id, flow_run.created, flow_run.updated
	FROM flow_run
	WHERE flow_run.state_type = $1::state_type AND flow_run.end_time IS NOT NULL AND flow_run.end_time >= $2::TIMESTAMP WITH TIME ZONE AND flow_run.id > $3::UUID ORDER BY flow_run.id
	 LIMIT $4::INTEGER
2025-09-23 13:35:03.647 GMT [452] ERROR:  relation "flow_run" does not exist at character 62
2025-09-23 13:35:03.647 GMT [452] STATEMENT:  SELECT flow_run.id, flow_run.next_scheduled_start_time
	FROM flow_run
	WHERE flow_run.next_scheduled_start_time <= $1::TIMESTAMP WITH TIME ZONE AND flow_run.state_type = $2::state_type AND flow_run.state_name = $3::VARCHAR
	 LIMIT $4::INTEGER
2025-09-23 13:35:03.647 GMT [453] ERROR:  relation "flow_run" does not exist at character 748
2025-09-23 13:35:03.647 GMT [453] STATEMENT:  SELECT flow_run.flow_id, flow_run.deployment_id, flow_run.work_queue_name, flow_run.flow_version, flow_run.deployment_version, flow_run.parameters, flow_run.idempotency_key, flow_run.context, flow_run.empirical_policy, flow_run.tags, flow_run.labels, flow_run.created_by, flow_run.infrastructure_pid, flow_run.job_variables, flow_run.infrastructure_document_id, flow_run.parent_task_run_id, flow_run.auto_scheduled, flow_run.state_id, flow_run.work_queue_id, flow_run.name, flow_run.state_type, flow_run.state_name, flow_run.state_timestamp, flow_run.run_count, flow_run.expected_start_time, flow_run.next_scheduled_start_time, flow_run.start_time, flow_run.end_time, flow_run.total_run_time, flow_run.id, flow_run.created, flow_run.updated
	FROM flow_run
	WHERE flow_run.state_type = $1::state_type
	 LIMIT $2::INTEGER
2025-09-23 13:35:04.557 GMT [453] ERROR:  relation "deployment" does not exist at character 28
2025-09-23 13:35:04.557 GMT [453] STATEMENT:  SELECT deployment.id
	FROM deployment
	WHERE deployment.paused IS NOT true AND deployment.updated >= $1::TIMESTAMP WITH TIME ZONE AND (EXISTS (SELECT deployment_schedule.deployment_id
	FROM deployment_schedule
	WHERE deployment_schedule.deployment_id = deployment.id AND deployment_schedule.active IS true)) ORDER BY deployment.id
	 LIMIT $2::INTEGER
2025-09-23 13:35:06.286 GMT [453] ERROR:  relation "automation_bucket" does not exist at character 13
2025-09-23 13:35:06.286 GMT [453] STATEMENT:  DELETE FROM automation_bucket WHERE automation_bucket."end" <= $1::TIMESTAMP WITH TIME ZONE
m
thought for 92.5 seconds
Thanks for the log snippet — those Postgres errors (missing relations for deployment, flow_run, automation_bucket) confirm the Prefect DB migrations did not run against your target database. Since you’re using ArgoCD (no kubectl), the fix is to bake migrations into your GitOps workflow so they run during sync.

Make sure these are true first
- The DB URL env var is set on anything running Prefect Server:
  - Env var: PREFECT_API_DATABASE_CONNECTION_URL
  - Example: postgresql+asyncpg://user:pass@host:5432/dbname
- Migrate-on-start can help, but do not rely on it alone in GitOps; run an explicit migration step as shown below:
  - Env var: PREFECT_SERVER_DATABASE_MIGRATE_ON_START (defaults to true)

ArgoCD options

Option A — PreSync hook Job (recommended)
- This runs migrations before each sync, then is cleaned up on success.
apiVersion: batch/v1
kind: Job
metadata:
  name: prefect-db-upgrade
  namespace: <namespace>
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-upgrade
          image: <your-prefect-server-image>   # match the API image tag
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
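One ArgoCD detail worth knowing: Jobs are immutable, so if a failed hook Job is left behind, the next sync can hit a conflict when re-applying it. The standard `BeforeHookCreation` delete policy sidesteps that by recreating the hook on each sync; a variant of the annotations above (same placeholders):

```yaml
metadata:
  annotations:
    argocd.argoproj.io/hook: PreSync
    # Delete any previous hook Job before creating a new one on each sync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
```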
- Commit this manifest alongside your app and let ArgoCD sync. In the Argo UI, open the Job pod logs to confirm it applied all migrations.

Option B — API initContainer (ensures DB is migrated before API starts)
spec:
  template:
    spec:
      initContainers:
        - name: db-migrate
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
          command: ["prefect"]
          args: ["server", "database", "upgrade", "-y"]
      containers:
        - name: api
          image: <your-prefect-server-image>
          env:
            - name: PREFECT_API_DATABASE_CONNECTION_URL
              valueFrom:
                secretKeyRef:
                  name: <your-db-secret>
                  key: DATABASE_URL
            - name: PREFECT_SERVER_DATABASE_MIGRATE_ON_START
              value: "true"
Common gotchas to check
- Use the async driver in the URL: postgresql+asyncpg://...
- The DB user needs privileges to CREATE/ALTER tables and create indexes.
- Point to the correct database (and default schema) — a typo can leave you connected to an empty database with no tables.
- Keep the migration Job/initContainer image tag in lockstep with the API image tag to avoid version drift.

Destructive options (only if you’re okay losing Prefect metadata)
- If you’ve ended up with a bad/partially initialized schema, you can add a one-time ArgoCD hook Job to run:
prefect server database reset -y
Then run the upgrade Job. Only do this if you intend to wipe Prefect’s tables.

References
- Server CLI (database commands): Server CLI
- Settings reference (env vars): Settings reference
- Helm (if you’re using Helm via ArgoCD): Prefect Server Helm

If you can share whether your ArgoCD app uses Helm or raw manifests (and the Prefect image tag you pin), I can give you an exact patch targeting your files.