@Marvin I deployed a self-hosted Prefect on g...
# ask-community
d
@Marvin I deployed a self-hosted Prefect on Google Cloud Kubernetes. The server can't be set up and I got several Postgres errors: prefect-server-postgresql-connection
m
thought for 230.0 seconds
d
@Marvin Resource not found in cluster: v1/Secret:prefect-server-postgresql
m
thought for 783.9 seconds
Thanks for the details; let's get you unstuck. A few quick clarifications will help me zero in on the cause:
- Which Prefect version are you running? (Assuming 3.x unless you say otherwise.)
- Did you deploy using the Prefect Helm chart? If so, can you share the relevant `values.yaml` snippets for the server and database config?
- Are you using Cloud SQL for Postgres (private IP or Cloud SQL Auth Proxy), or another Postgres?
- Can you paste the exact error lines from the failing pod(s) that mention “prefect-server-postgresql-connection”?

In the meantime, here are the most common root causes and how to check them:

1) Verify the database connection env var is present in the server pods
- Prefect 3.x expects `PREFECT_API_DATABASE_CONNECTION_URL` to be set to an async Postgres URL:
Copy code
postgresql+asyncpg://USER:PASSWORD@HOST:PORT/DBNAME?sslmode=require
If you use `postgresql://...` without `+asyncpg`, connections will fail.
- Check that this env var is actually present in the API pod:
Copy code
kubectl -n <namespace> exec -it deploy/prefect-server-api -- printenv | grep PREFECT_API_DATABASE_CONNECTION_URL
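If the variable is wired in from a Secret, the Deployment spec will contain something like this (a minimal sketch; the Secret and key names here are illustrative, so match them to your chart):
Copy code
env:
  - name: PREFECT_API_DATABASE_CONNECTION_URL
    valueFrom:
      secretKeyRef:
        name: prefect-server-postgresql-connection  # must exist in the same namespace
        key: connection-string                      # illustrative key; use whatever your chart sets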
- If you’re referencing a Kubernetes Secret, confirm the Secret name/key matches what your Deployment expects. The name “prefect-server-postgresql-connection” looks like a Secret name. Make sure it exists and the key inside matches the env var source.

2) Inspect the failing pods and events
- Check the API and migration pods for Postgres errors:
Copy code
kubectl -n <namespace> logs deploy/prefect-server-api --tail=200
kubectl -n <namespace> get pods
kubectl -n <namespace> logs <prefect-server-migrations-pod-name> --tail=200
kubectl -n <namespace> describe pod <prefect-server-api-pod>
kubectl -n <namespace> get events --sort-by='.lastTimestamp'
- If migrations are failing, that usually points to bad credentials, missing permissions on the database, or an unreachable database host.

3) Cloud SQL specifics (if applicable)
- If you’re using Cloud SQL:
  - With private IP: ensure your GKE nodes can reach the Cloud SQL private IP and firewall rules/network policies allow it.
  - With the Cloud SQL Auth Proxy: run it as a sidecar or DaemonSet, point the `HOST:PORT` in the connection URL at the proxy (usually `127.0.0.1:5432`), and include `sslmode=disable` or whatever is appropriate for the proxy configuration (see the sidecar sketch after the connectivity test below).
- Test connectivity from the cluster with a temporary pod (replace placeholders):
Copy code
kubectl -n <namespace> run psql-test --rm -it --image=postgres:16 --restart=Never -- \
    psql "host=<HOST> port=<PORT> user=<USER> dbname=<DBNAME> sslmode=require"
If the psql test fails, the issue is networking/credentials rather than Prefect.

4) Validate credentials and permissions
- Ensure the Postgres user in the URL can connect to the DB and run migrations (CREATE/ALTER on the schema); see the grants sketch below.
- If you rotated passwords or changed secrets, make sure the Deployment picked up the change (roll the pods).
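For the grants mentioned above, a sketch run via psql as a superuser; the `prefect` database and role names are placeholders:
Copy code
psql "host=<HOST> port=<PORT> user=postgres dbname=prefect" \
  -c 'GRANT CONNECT ON DATABASE prefect TO prefect;' \
  -c 'GRANT ALL PRIVILEGES ON SCHEMA public TO prefect;'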
5) Confirm Helm values/Secret wiring (if using the Helm chart)
- The chart should set `PREFECT_API_DATABASE_CONNECTION_URL` via values or a Secret. Double-check:
- The Secret exists:
Copy code
kubectl -n <namespace> get secret prefect-server-postgresql-connection -o yaml
- The key name in the Secret matches the `envFrom`/`valueFrom` reference in the Deployment.
- After fixes, restart the pods:
Copy code
kubectl -n <namespace> rollout restart deploy/prefect-server-api
Helpful references:
- Prefect 3: Deploy Prefect Server with Helm
- Prefect 3: PREFECT_API_DATABASE_CONNECTION_URL setting
- Prefect 3: Self-hosted server CLI

If you can share:
- The exact Postgres error lines
- Your `values.yaml` DB section (with secrets redacted)
- Whether you’re using Cloud SQL (and if so, proxy vs private IP)
…I can pinpoint the fix quickly and suggest the exact Helm values or manifests to correct it.
d
@Marvin
• Resource not found in cluster: v1/Secret:prefect-server-postgresql
• Resource not found in cluster: v1/Secret:prefect-server-postgresql-connection
• Resource not found in cluster: v1/Service:prefect-server
• Resource not found in cluster: v1/Service:prefect-server-postgresql
• Resource not found in cluster: v1/Service:prefect-server-postgresql-hl
• etc...
I am using the latest Helm chart:
Copy code
postgresql:
  enabled: true
  primary:
    # Ephemeral PostgreSQL for dev/testing.
    persistence:
      enabled: false
    resources:
      # DB pod resources
      requests:
        cpu: 1
        memory: 1Gi
      limits:
        cpu: 1
        memory: 1Gi

ingress:
  enabled: true
  className: nginx-internal-traffic
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-internal-v2
    acme.cert-manager.io/http01-ingress-class: nginx-internal-traffic

    # OAuth2 Proxy
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.oauth2-proxy.svc.cluster.local/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.dglecom.net/oauth2/start?rd=https://$host$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-snippet: |
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-Uri $request_uri;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_set_header X-Forwarded-Method $request_method;
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($scheme = https) {
        add_header Strict-Transport-Security "max-age=0;";
      }
      proxy_intercept_errors off;

  host:
    hostname: prefect.svc.dglecom.net
    path: /
    pathType: Prefix

  tls: true
  extraTls:
    - hosts:
        - prefect.svc.dglecom.net
      secretName: prefect-origin-tls

  extraPaths:
    - path: /api
      backend:
        service:
          name: prefect-server
          port:
            number: 4200

server:
  uiConfig:
    # External URL of public API; informs the UI and remote clients.
    prefectUiApiUrl: "https://prefect.svc.dglecom.net/api"

  resources:
    # Prefect server container resources;
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 1Gi
m
thought for 117.8 seconds
Thanks for sharing the values; that helps. The errors you’re seeing point to a naming mismatch: your cluster doesn’t have resources named “prefect-server-…”, but something (likely your Ingress or a values reference) expects those names.

Key points to check/fix:

1) Verify the actual resource names created by your Helm release
- Bitnami PostgreSQL (which the chart uses when postgresql.enabled=true) creates names like:
  - Secret: <release>-postgresql
  - Services: <release>-postgresql and <release>-postgresql-hl (or -primary, depending on chart version)
- The Prefect Server Service is named off the Helm release as well.
- Find your release and resources:
Copy code
helm list -n <namespace>
kubectl -n <namespace> get svc,secret | grep -E 'prefect|postgresql'
kubectl -n <namespace> get all -l app.kubernetes.io/instance=<your-release-name>
- If your release name is “prefect”, you’ll typically see “prefect-postgresql”, not “prefect-server-postgresql”. That would explain the “Resource not found” messages.

2) Fix your Ingress backend service name
- You have:
Copy code
extraPaths:
    - path: /api
      backend:
        service:
          name: prefect-server
          port:
            number: 4200
- Confirm the Prefect API Service name actually matches “prefect-server”. If not, change it to the real service name you found in step 1. For example, if your release is “prefect” and the chart created a service named “prefect-server” or “prefect”, use that exact name.

3) Remove/replace references to non-existent Secrets
- These errors suggest something is referencing Secrets named:
  - prefect-server-postgresql
  - prefect-server-postgresql-connection
- With postgresql.enabled: true, you should NOT need a connection Secret for Prefect; the chart wires the DB automatically to the in-cluster Postgres.
- Search your values for fields like existingSecret, extraEnvFromSecret, extraEnvFrom, envFrom, or similar that reference “prefect-server-postgresql-connection” (or “prefect-server-postgresql”) and remove them.
- If you do want to provide your own DB connection Secret (external Postgres), either create the Secret your values reference:
Copy code
kubectl -n <namespace> create secret generic prefect-server-postgresql-connection \
      --from-literal=PREFECT_API_DATABASE_CONNECTION_URL='<postgresql+asyncpg://USER:PASSWORD@HOST:5432/DBNAME?sslmode=require>'
…and set postgresql.enabled: false, or update your values to reference the actual Secret name you created.

4) Confirm namespace and release name consistency
- If you installed the chart with a different release name (e.g., “prefect” vs “prefect-server”), your resources will have different prefixes. Make sure any hard-coded names in your values (Ingress backends, Secret references) are updated to match your release name and namespace.

5) Reconcile and restart
- After updating values, run a Helm upgrade with the same release name and namespace you used to install (example below).
- Then validate:
Copy code
kubectl -n <namespace> get svc,secret | grep -E 'prefect|postgresql'
kubectl -n <namespace> describe ingress <your-ingress>
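For the Helm upgrade step above: if you installed from the PrefectHQ chart repo with release name `prefect-server` in namespace `prefect` (both assumed here for illustration), the reconcile would look like:
Copy code
helm upgrade prefect-server prefect/prefect-server -n prefect -f values.yaml
kubectl -n prefect get svc,secret | grep -E 'prefect|postgresql'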
If you can share:
- Your Helm release name and namespace
- Output of:
Copy code
helm get values <release> -n <namespace>
kubectl -n <namespace> get svc,secret | grep -E 'prefect|postgresql'
…I can tell you the exact service name to use in your Ingress and whether any Secret references need to be removed or renamed.

References:
- Prefect 3: Deploy Prefect Server with Helm
- PREFECT_API_DATABASE_CONNECTION_URL setting
That error means your Prefect Server release is trying to reference a Kubernetes Secret named `prefect-server-postgresql`, but it doesn’t exist in the target namespace.

What that secret is
- It’s the PostgreSQL credentials Secret created by the chart-managed PostgreSQL subchart (Bitnami) when `postgresql.enabled` is true.
- By default, its name is `<release-name>-postgresql` (e.g., if your Helm release is `prefect-server`, the secret is `prefect-server-postgresql`).
- If you disable chart-managed PostgreSQL, the chart should not reference that secret; you must provide your own DB credentials/URL instead.

Quick checks
- Confirm namespace and existence:
Copy code
kubectl get secret prefect-server-postgresql -n <namespace>
- Check for any similar secrets (name override or different release name):
Copy code
kubectl get secrets -n <namespace> | grep -i postgresql
- Verify the Helm release name/namespace ArgoCD is using matches where your Prefect resources deploy.

Common causes and fixes

1) You want the chart to manage PostgreSQL (recommended simplest path)
- Ensure it’s enabled and let the subchart create the secret:
Copy code
postgresql:
    enabled: true
    # optionally set auth values; if not set, the subchart generates credentials and the Secret automatically
    # auth:
    #   username: prefect
    #   database: prefect
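Once synced, you can confirm the subchart created the Secret and see which key names it contains (the release name `prefect-server` is assumed here):
Copy code
kubectl -n <namespace> describe secret prefect-server-postgresql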
- If you previously set an existing secret name, either:
  - Remove it (let the subchart generate credentials/secret), or
  - Create the Secret it references (see next section).
- ArgoCD tips:
  - Ensure Application.spec.destination.namespace matches where the release is installed.
  - Don’t split the Prefect app and its PostgreSQL subchart into different namespaces.
  - If you’re using a separate app to create a Secret (e.g., SealedSecret/ExternalSecret), give that app a lower sync wave so it applies before the Prefect app.

2) You want the chart to use a pre-existing Secret for the Bitnami PostgreSQL subchart
- Configure the chart to point at your existing Secret:
Copy code
postgresql:
    enabled: true
    auth:
      existingSecret: prefect-server-postgresql
      # Depending on your Bitnami PostgreSQL version, you may also need to match specific key names in that Secret.
- Create the Secret before syncing the app (keys depend on the Bitnami chart version; check with your chart version):
Copy code
kubectl -n <namespace> create secret generic prefect-server-postgresql \
    --from-literal=postgres-password='<admin-password>' \
    --from-literal=password='<app-user-password>'
Notes:
- The required keys and names vary by Bitnami chart version (e.g., postgres-password / password / replication-password). Run this to confirm what your version expects:
Copy code
helm show values oci://registry-1.docker.io/bitnamicharts/postgresql --version <your-version>
- If you’re not sure, share your `values.yaml` (postgresql section) and we’ll tailor the exact keys.

3) You’re using an external database and do NOT want the chart-managed PostgreSQL
- Disable the subchart and stop referencing its Secret:
Copy code
postgresql:
    enabled: false
- Provide the database connection config to Prefect Server (typically via a Secret containing the connection URL and chart values that wire it to the API and migrations job). Exact keys differ by chart version; see the Helm docs and we can help map your values. A chart-agnostic Secret sketch is included at the end of this message.
- If you still see references to `<release>-postgresql`, something in your values is still toggling those templates on; share your values and we’ll pinpoint it.

ArgoCD-specific nuances
- Namespace mismatches are the most common cause: ArgoCD installs the release into namespace A, but a Secret was created/applied in namespace B.
- If your Secret comes from another ArgoCD Application (e.g., SealedSecrets/ExternalSecrets operator), use sync-waves so the Secret manifests apply before Prefect Server (see the annotation sketch below).
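A minimal sketch of the standard ArgoCD sync-wave annotation, applied to the Secret-producing manifests or Application; the wave value here is illustrative (lower waves sync first, and the default is 0):
Copy code
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "-1"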
- If you’re using nameOverride/fullnameOverride in Helm values, the generated Secret name will change accordingly. Ensure consumers reference the same, final name.

If you can share:
- Your Prefect Helm values (at least the `postgresql:` section and any DB/secret/env settings)
- The ArgoCD Application manifest (destination.namespace and Helm values section)
- The output of:
Copy code
kubectl get secrets -n <namespace> | grep -i postgresql
…I can give you an exact set of values and a Secret manifest to resolve it.

Docs for reference
- Prefect Server on Helm: docs-3.prefect.io/v3/advanced/server-helm
- Secrets guidance: docs-3.prefect.io/v3/how-to-guides/configuration/store-secrets

Summary of typical values snippets
- Chart-managed PostgreSQL (auto-secret):
Copy code
postgresql:
    enabled: true
- Chart-managed PostgreSQL using your existing Secret:
Copy code
postgresql:
    enabled: true
    auth:
      existingSecret: prefect-server-postgresql
# Make sure you create the Secret ahead of time with the keys your Bitnami PostgreSQL version expects.
- External DB (no chart-managed PostgreSQL):
Copy code
postgresql:
    enabled: false

  # Then point the Prefect API and migration job to your external DB via a Secret/env.
  # The exact keys/paths are chart-version-specific; share your values or follow the chart docs.
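As referenced above, the connection Secret itself is chart-agnostic. A sketch, with the Secret name and key as placeholders to match whatever your values reference:
Copy code
apiVersion: v1
kind: Secret
metadata:
  name: prefect-server-postgresql-connection
type: Opaque
stringData:
  # async driver URL, as Prefect 3.x expects
  connection-string: postgresql+asyncpg://USER:PASSWORD@HOST:5432/DBNAME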
If you post your current values and namespace, I’ll help you adjust them precisely.