<@ULVA73B9P> I am using helm to deploy a prefect s...
# ask-marvin
h
@Marvin I am using helm to deploy a prefect server on kubernetes. I am using the bitnami postgresql image to also deploy a postgres database. How can i add additional configuration for the postgres within the deployment such as max_wal_size, min_wal_size, shared_buffers?
m
thought for 37.7 seconds
Great question — this is configured on the Bitnami PostgreSQL Helm chart (not in the Prefect chart). You can set PostgreSQL tunables like `max_wal_size`, `min_wal_size`, and `shared_buffers` directly in your Postgres release’s values. Depending on the Bitnami chart version, you’ll use one of these patterns:
- Newer Bitnami PostgreSQL charts (primary/replica split): values.yaml
```
primary:
    extendedConfiguration: |
      shared_buffers = '2GB'
      max_wal_size = '4GB'
      min_wal_size = '1GB'
```
- Older Bitnami PostgreSQL charts: values.yaml
```
postgresqlExtendedConf:
    shared_buffers: '2GB'
    max_wal_size: '4GB'
    min_wal_size: '1GB'
```
Apply with your normal Helm install/upgrade:
```
helm upgrade --install my-postgres bitnami/postgresql -n <namespace> -f values.yaml
```
Notes:
- Make sure your Postgres pod has enough memory for `shared_buffers` (configure `primary.resources` accordingly).
- Use MB/GB units that Postgres accepts (e.g., `512MB`, `2GB`) and quote values to avoid YAML type issues.
- After changing these, the Postgres pods will restart to pick up the new config.
Docs/References:
- Bitnami PostgreSQL chart README and values: bitnami/charts: postgresql
- Artifact Hub (values reference): Artifact Hub: bitnami/postgresql
If you share your Bitnami chart version (e.g., `helm show chart bitnami/postgresql | grep version`) or your current `values.yaml`, I can tailor the exact fields for your setup.
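Once the pods come back, you can also confirm the settings landed in the container (a sketch; the `conf.d/override.conf` path follows Bitnami's usual convention for `extendedConfiguration`, so verify it for your chart version):
```
# Check the rendered extended configuration inside the Postgres pod.
# <namespace> and <postgres-pod> are placeholders for your release.
kubectl exec -n <namespace> -it <postgres-pod> -- \
  cat /opt/bitnami/postgresql/conf/conf.d/override.conf
```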
h
@Marvin how would i do that if i have only the prefect server and prefect worker helm charts installed which then are somewhat responsible for spinning up the postgres database? This is my server helm deployment where i tried to pass configuration for the postgres:
```
commonAnnotations: {}
commonLabels: {}
fullnameOverride: prefect-server
ingress:
  annotations: {}
  className: ''
  enabled: false
  extraHosts: null
  extraPaths: null
  extraRules: null
  extraTls: null
  host:
    hostname: prefect.local
    path: /
    pathType: ImplementationSpecific
  selfSigned: false
  servicePort: server-svc-port
  tls: false
nameOverride: ''
namespaceOverride: ''
postgresql:
  auth:
    database: server
    enablePostgresUser: false
    password: mysecurepassword
    username: prefect-admin
  enabled: true
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
    postgresqlConfiguration:
      checkpoint_completion_target: '0.9'
      checkpoint_timeout: 15min
      max_wal_size: 4GB
      min_wal_size: 1GB
      shared_buffers: 512MB
      statement_timeout: 30min
secret:
  create: true
  database: ''
  host: ''
  name: ''
  password: ''
  port: ''
  username: ''
server:
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 100
    minReplicas: 1
    targetCPU: 80
    targetMemory: 80
  containerSecurityContext:
    allowPrivilegeEscalation: false
    capabilities: {}
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1001
  debug: false
  env:
    - name: PREFECT_EVENTS_RETENTION_PERIOD
      value: 3d
  extraArgs: null
  extraContainers: null
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  extraVolumeMounts: null
  extraVolumes: null
  image:
    prefectTag: 3.0.10-python3.11-kubernetes
    pullPolicy: IfNotPresent
    pullSecrets:
      - gitlab-token-auth
    repository: prefecthq/prefect
  livenessProbe:
    config:
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    enabled: false
  loggingLevel: WARNING
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    fsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
  prefectApiHost: 0.0.0.0
  prefectApiUrl: http://localhost:4200/api
  priorityClassName: ''
  readinessProbe:
    config:
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    enabled: false
  replicaCount: 1
  resources:
    limits:
      cpu: '1'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi
  revisionHistoryLimit: 10
  tolerations: null
  uiConfig:
    enabled: true
    prefectUiApiUrl: ''
    prefectUiStaticDirectory: /ui_build
    prefectUiUrl: ''
service:
  annotations: {}
  clusterIP: ''
  externalTrafficPolicy: Cluster
  extraPorts: []
  nodePort: ''
  port: 4200
  targetPort: 4200
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: prefect-server
global:
  cattle:
    systemProjectId: p-kmh5z
```
m
thought for 56.2 seconds
Thanks for sharing your values — that helps a lot. Short answer: the Prefect Server chart vendors the Bitnami PostgreSQL chart as a subchart, so you pass Postgres tuning via the `postgresql` block in your Prefect Server values. The Worker chart does not create a database; only the Server chart does.
The key you used (`primary.postgresqlConfiguration`) isn’t one that Bitnami’s PostgreSQL chart reads. Use one of these (which one depends on the Bitnami chart version that your Prefect Server chart depends on):
- Newer Bitnami chart (recommended): `primary.extendedConfiguration` (multiline string of postgresql.conf entries)
- Older Bitnami chart: `postgresqlExtendedConf` (map of settings)
Here’s your values adapted to both styles — pick the one that matches your Bitnami version.
Newer Bitnami (primary.extendedConfiguration)
```
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
    # Postgres tunables
    extendedConfiguration: |
      checkpoint_completion_target = '0.9'
      checkpoint_timeout = '15min'
      max_wal_size = '4GB'
      min_wal_size = '1GB'
      shared_buffers = '512MB'
      statement_timeout = '30min'
    # Ensure memory resources are compatible with shared_buffers
    resources:
      requests:
        cpu: "250m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"
```
Older Bitnami (postgresqlExtendedConf)
```
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
  # Postgres tunables
  postgresqlExtendedConf:
    checkpoint_completion_target: '0.9'
    checkpoint_timeout: '15min'
    max_wal_size: '4GB'
    min_wal_size: '1GB'
    shared_buffers: '512MB'
    statement_timeout: '30min'
```
How to tell which one you need
- Check the Bitnami PostgreSQL dependency version in the Prefect Server chart you’re running, then consult its README/values:
  - Prefect Helm charts: Prefect Helm: prefect-server
  - Bitnami PostgreSQL chart: Bitnami Helm: postgresql
- If you have the chart version handy, you can also `helm show values` for that Bitnami version and search for `extendedConfiguration` vs `postgresqlExtendedConf`.
A couple of tips
- Quote sizes and durations (e.g., `'4GB'`, `'15min'`) to avoid YAML typing issues.
- Make sure the pod’s memory requests/limits support your `shared_buffers` choice.
- After applying, the Postgres pod will restart. You can verify the settings from inside the pod:
```
kubectl exec -n <ns> -it <postgres-pod> -- psql -U <user> -d <db> -c "show shared_buffers; show max_wal_size; show min_wal_size; show statement_timeout;"
```
- Only the Server chart manages Postgres; you don’t need to (and can’t) configure a DB in the Worker chart.
If you share your Prefect Server chart version (so I can infer the Bitnami dependency), I can tell you exactly which field your setup expects.
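One way to pin down that dependency version yourself (a sketch; the `prefect` repo alias and the untarred directory name are assumptions):
```
# Pull the Prefect Server chart locally and read its declared dependencies.
helm pull prefect/prefect-server --untar
grep -A 2 'name: postgresql' prefect-server/Chart.yaml   # prints the pinned subchart version
```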
h
@Marvin this is my image version bitnami/postgresql:14.3.0
m
thought for 49.5 seconds
Thanks — the `bitnami/postgresql:14.3.0` you’re setting is the Docker image tag; the config keys you can use are determined by the Bitnami PostgreSQL Helm chart version that the Prefect Server chart depends on. In the Prefect Server chart, Postgres is created only by the Server chart’s `postgresql` subchart. To pass tunables like `max_wal_size`, use the Bitnami chart’s supported keys. In most recent versions, that’s `primary.extendedConfiguration` (multiline string). Older versions use `postgresqlExtendedConf` (map).
Try this first (most likely to work):
```
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
    extendedConfiguration: |
      checkpoint_completion_target = '0.9'
      checkpoint_timeout = '15min'
      max_wal_size = '4GB'
      min_wal_size = '1GB'
      shared_buffers = '512MB'
      statement_timeout = '30min'
```
If your Bitnami subchart is older and doesn’t support `extendedConfiguration`, use:
```
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  postgresqlExtendedConf:
    checkpoint_completion_target: '0.9'
    checkpoint_timeout: '15min'
    max_wal_size: '4GB'
    min_wal_size: '1GB'
    shared_buffers: '512MB'
    statement_timeout: '30min'
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
```
Apply as usual:
```
helm upgrade --install prefect-server prefecthq/prefect-server -n <ns> -f values.yaml
```
Verify the settings in the running pod:
```
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "show shared_buffers; show max_wal_size; show min_wal_size; show statement_timeout;"
```
How to tell which key your setup supports
- Check the Bitnami subchart version your Prefect Server release uses, then look up its README/values:
  - Prefect Server chart: Prefect Helm: prefect-server
  - Bitnami PostgreSQL chart: Bitnami Helm: postgresql
- Quick checks you can run:
  - Show the Bitnami values your release knows about:
```
helm show values bitnami/postgresql | grep -n "extendedConfiguration\|postgresqlExtendedConf"
```
  - Or inspect the installed release to find the Bitnami chart annotation:
```
helm get manifest <server-release> -n <ns> | grep -m1 "chart: postgresql-"
```
Notes
- The Worker chart does not provision a database; only the Server chart’s `postgresql.enabled: true` does.
- Keep values quoted (e.g., '4GB', '15min') to avoid YAML typing issues.
- Ensure the Postgres pod has enough memory for your `shared_buffers` choice (set `postgresql.primary.resources` accordingly).
If you can share your Prefect Server chart version (and/or the Bitnami subchart version from the last command), I can confirm exactly which key your release expects.
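If the release is already installed, the chart version is also visible from Helm itself (a sketch; the release name comes from your values, the namespace placeholder matches the commands above):
```
# Show the installed release and its chart version (e.g., prefect-server-x.y.z).
helm list -n <ns> -f prefect-server
```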
h
@Marvin thanks!
```
extendedConfiguration: |
      checkpoint_completion_target = '0.9'
      checkpoint_timeout = '15min'
      max_wal_size = '4GB'
      min_wal_size = '1GB'
      shared_buffers = '512MB'
      statement_timeout = '30min'
```
this worked
m
thought for 24.7 seconds
Awesome — glad `primary.extendedConfiguration` worked! A few quick follow-ups you may find useful:
- Verify the live settings in the pod
```
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "
    SHOW shared_buffers;
    SHOW max_wal_size;
    SHOW min_wal_size;
    SHOW checkpoint_timeout;
    SHOW statement_timeout;"
```
Or, for a full view:
```
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "
    SELECT name, setting, unit, context, source FROM pg_settings
    WHERE name IN ('shared_buffers','max_wal_size','min_wal_size','checkpoint_timeout','statement_timeout')
    ORDER BY name;"
```
- Make sure resources match your settings (especially `shared_buffers`). You can set:
```
postgresql:
    primary:
      resources:
        requests:
          cpu: "250m"
          memory: "1Gi"
        limits:
          cpu: "1"
          memory: "2Gi"
```
- Watch disk usage when increasing WAL sizes; ensure the PVC has enough headroom:
```
postgresql:
    primary:
      persistence:
        enabled: true
        size: 20Gi   # or larger based on your WAL/data needs
```
- If you prefer to pass config inline at upgrade time:
```
helm upgrade --install prefect-server prefecthq/prefect-server -n <ns> \
    --set postgresql.primary.extendedConfiguration="shared_buffers = '512MB'\nmax_wal_size = '4GB'\nmin_wal_size = '1GB'"
```
References
- Prefect Server Helm chart: Prefect Helm: prefect-server
- Bitnami PostgreSQL chart (values and config options): Bitnami Helm: postgresql
If you want, share your expected workload (flow volume/concurrency), and I can suggest a starting point for Postgres and Prefect API database pool sizing.
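One extra habit that can save a surprise here: render the chart locally before applying and confirm the tunables appear in the output (a sketch; the chart reference and flags mirror the upgrade command above):
```
# Preview the rendered manifests and check that shared_buffers made it in.
helm template prefect-server prefecthq/prefect-server -n <ns> -f values.yaml \
  | grep -B 3 -A 3 "shared_buffers"
```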
h
@Marvin why am i seeing this all over my postgres logs?
```
2025-09-05 17:17:39.517 GMT [30589] ERROR:  canceling statement due to user request
2025-09-05 17:17:39.517 GMT [30589] STATEMENT:  DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 17:32:49.604 GMT [33113] ERROR:  canceling statement due to user request
2025-09-05 17:32:49.604 GMT [33113] STATEMENT:  DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 17:47:59.706 GMT [35513] ERROR:  canceling statement due to user request
2025-09-05 17:47:59.706 GMT [35513] STATEMENT:  DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 18:03:09.787 GMT [38281] ERROR:  canceling statement due to user request
2025-09-05 18:03:09.787 GMT [38281] STATEMENT:  DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 18:18:19.868 GMT [40700] ERROR:  canceling statement due to user request
2025-09-05 18:18:19.868 GMT [40700] STATEMENT:  DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 18:33:29.948 GMT [43316] ERROR:  canceling statement due to user request
2025-09-05 18:33:29.948 GMT [43316] STATEMENT:  DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
```
m
thought for 394.3 seconds
Short answer: those are Prefect Server’s background “events cleanup” deletes getting cancelled mid-run.
What’s happening
- Prefect Server persists orchestration “events” in the `events` table and runs a periodic pruning job to delete old rows.
- The job runs roughly on a fixed cadence (your timestamps show ~15 minutes) and issues:
```
DELETE FROM events WHERE events.occurred < <retention_cutoff>
```
- If the delete runs long (e.g., many rows to prune, slow disk, VACUUM needed), the service cancels the statement when its time budget ends. Postgres logs this as:
```
ERROR:  canceling statement due to user request
```
which is a client-initiated cancel, not a server `statement_timeout`. This is safe to ignore functionally, but it can be noisy if every cycle cancels.
How to reduce the log spam
- Make the delete cheaper:
  - Ensure there’s an index on `events(occurred)`:
```
CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);
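-- (sketch, not from the original thread) verify the index exists afterwards
-- via the standard pg_indexes view; the index name above is this thread's
-- suggestion, not a Prefect-defined object:
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'events';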
```
  - Run regular maintenance:
```
VACUUM (VERBOSE, ANALYZE) events;
```
- Let it delete for longer before it gets cancelled:
  - Increase Prefect Server’s DB operation timeout (used by the server’s async Postgres engine). For example:
```
export PREFECT_SERVER_DATABASE_TIMEOUT=60
```
Then restart the server.
- Reduce the size of each delete batch so each statement finishes faster (fewer rows per DELETE). Look for the server service settings for the Event Persister and lower the delete batch size. For example, in environment variables:
```
# example values – tune to your DB
export PREFECT_SERVER_SERVICES_EVENT_PERSISTER_DELETE_BATCH_SIZE=2000
```
- Optionally adjust event retention so there are fewer rows to prune each cycle (shorter retention = smaller deletes). Check your server events settings and set a shorter retention window if that fits your needs.
Find the exact knobs in your install
- You can list your active settings and defaults, then search for the relevant keys:
```
prefect config view --show-defaults
```
Look for:
- `server.database.timeout`
- `server.services.event_persister.*` (e.g., delete batch size)
- `server.events.*` (retention)
References
- Prefect 3 database maintenance tips: Database maintenance
- Event persister service (API reference): Event persister API
If you can share:
- Your Prefect version
- Approximate size of the `events` table and retention goal
I can suggest concrete values for the timeout/batch size and a maintenance plan tailored to your setup.
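To filter that long dump down to just these knobs, something like this works (a sketch; the grep pattern is an assumption matching the env-style names your version prints):
```
# Show only the settings relevant to the pruning behaviour discussed above.
prefect config view --show-defaults | grep -iE "database_timeout|event_persister|retention"
```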
h
@Marvin this is my output for `prefect config view --show-defaults`:
```
http://localhost:4200
PREFECT_PROFILE='ephemeral'
PREFECT_API_BLOCKS_REGISTER_ON_START='True' (from defaults)
PREFECT_API_DATABASE_CONNECTION_TIMEOUT='5.0' (from defaults)
PREFECT_API_DATABASE_CONNECTION_URL='********' (from env)
PREFECT_API_DATABASE_DRIVER='None' (from defaults)
PREFECT_API_DATABASE_ECHO='False' (from defaults)
PREFECT_API_DATABASE_HOST='None' (from defaults)
PREFECT_API_DATABASE_MIGRATE_ON_START='True' (from defaults)
PREFECT_API_DATABASE_NAME='None' (from defaults)
PREFECT_API_DATABASE_PASSWORD='********' (from defaults)
PREFECT_API_DATABASE_PORT='None' (from defaults)
PREFECT_API_DATABASE_TIMEOUT='10.0' (from defaults)
PREFECT_API_DATABASE_USER='None' (from defaults)
PREFECT_API_DEFAULT_LIMIT='200' (from defaults)
PREFECT_API_ENABLE_HTTP2='False' (from defaults)
PREFECT_API_ENABLE_METRICS='False' (from defaults)
PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL='0:05:00' (from defaults)
PREFECT_API_EVENTS_STREAM_OUT_ENABLED='True' (from defaults)
PREFECT_API_KEY='********' (from defaults)
PREFECT_API_LOG_RETRYABLE_ERRORS='False' (from defaults)
PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS='10000' (from defaults)
PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES='10000' (from defaults)
PREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS='20.0' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE='20' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL='5.0' (from defaults)
PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS='60' (from defaults)
PREFECT_API_SERVICES_FOREMAN_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS='30' (from defaults)
PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE='3' (from defaults)
PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS='15.0' (from defaults)
PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS='60' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS='0:00:15' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS='5.0' (from defaults)
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS='5.0' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00' (from defaults)
PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_TRIGGERS_ENABLED='True' (from defaults)
PREFECT_API_SSL_CERT_FILE='None' (from defaults)
PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH='2000' (from defaults)
PREFECT_API_TLS_INSECURE_SKIP_VERIFY='False' (from defaults)
PREFECT_API_URL='http://localhost:4200/api' (from env)
PREFECT_ASYNC_FETCH_STATE_RESULT='False' (from defaults)
PREFECT_CLIENT_CSRF_SUPPORT_ENABLED='True' (from defaults)
PREFECT_CLIENT_ENABLE_METRICS='False' (from defaults)
PREFECT_CLIENT_MAX_RETRIES='5' (from defaults)
PREFECT_CLIENT_METRICS_PORT='4201' (from defaults)
PREFECT_CLIENT_RETRY_EXTRA_CODES='set()' (from defaults)
PREFECT_CLIENT_RETRY_JITTER_FACTOR='0.2' (from defaults)
PREFECT_CLI_COLORS='True' (from defaults)
PREFECT_CLI_PROMPT='None' (from defaults)
PREFECT_CLI_WRAP_LINES='True' (from defaults)
PREFECT_CLOUD_API_URL='https://api.prefect.cloud/api' (from defaults)
PREFECT_CLOUD_UI_URL='https://app.prefect.cloud' (from defaults)
PREFECT_DEBUG_MODE='false' (from env)
PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE='None' (from defaults)
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='None' (from defaults)
PREFECT_DEFAULT_WORK_POOL_NAME='None' (from defaults)
PREFECT_DEPLOYMENT_CONCURRENCY_SLOT_WAIT_SECONDS='30.0' (from defaults)
PREFECT_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS='50' (from defaults)
PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER='0:01:00' (from defaults)
PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE='500' (from defaults)
PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES='500' (from defaults)
PREFECT_EVENTS_MAXIMUM_SIZE_BYTES='1500000' (from defaults)
PREFECT_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL='0:15:00' (from defaults)
PREFECT_EVENTS_PROACTIVE_GRANULARITY='0:00:05' (from defaults)
PREFECT_EVENTS_RETENTION_PERIOD='3d' (from env)
PREFECT_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE='250' (from defaults)
PREFECT_EXPERIMENTAL_ENABLE_SCHEDULE_CONCURRENCY='False' (from defaults)
PREFECT_EXPERIMENTAL_WARN='True' (from defaults)
PREFECT_FLOW_DEFAULT_RETRIES='0' (from defaults)
PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS='0' (from defaults)
PREFECT_HOME='/home/prefect/.prefect' (from defaults)
PREFECT_LOCAL_STORAGE_PATH='/home/prefect/.prefect/storage' (from defaults)
PREFECT_LOGGING_COLORS='True' (from defaults)
PREFECT_LOGGING_EXTRA_LOGGERS='[]' (from defaults)
PREFECT_LOGGING_INTERNAL_LEVEL='ERROR' (from defaults)
PREFECT_LOGGING_LEVEL='INFO' (from defaults)
PREFECT_LOGGING_LOG_PRINTS='False' (from defaults)
PREFECT_LOGGING_MARKUP='False' (from defaults)
PREFECT_LOGGING_SERVER_LEVEL='WARNING' (from env)
PREFECT_LOGGING_SETTINGS_PATH='/home/prefect/.prefect/logging.yml' (from defaults)
PREFECT_LOGGING_TO_API_BATCH_INTERVAL='2.0' (from defaults)
PREFECT_LOGGING_TO_API_BATCH_SIZE='4000000' (from defaults)
PREFECT_LOGGING_TO_API_ENABLED='True' (from defaults)
PREFECT_LOGGING_TO_API_MAX_LOG_SIZE='1000000' (from defaults)
PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW='warn' (from defaults)
PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION='True' (from defaults)
PREFECT_MEMO_STORE_PATH='/home/prefect/.prefect/memo_store.toml' (from defaults)
PREFECT_MESSAGING_BROKER='prefect.server.utilities.messaging.memory' (from defaults)
PREFECT_MESSAGING_CACHE='prefect.server.utilities.messaging.memory' (from defaults)
PREFECT_PROFILES_PATH='/home/prefect/.prefect/profiles.toml' (from defaults)
PREFECT_RESULTS_DEFAULT_SERIALIZER='pickle' (from defaults)
PREFECT_RESULTS_PERSIST_BY_DEFAULT='False' (from defaults)
PREFECT_RUNNER_POLL_FREQUENCY='10' (from defaults)
PREFECT_RUNNER_PROCESS_LIMIT='5' (from defaults)
PREFECT_RUNNER_SERVER_ENABLE='False' (from defaults)
PREFECT_RUNNER_SERVER_HOST='localhost' (from defaults)
PREFECT_RUNNER_SERVER_LOG_LEVEL='ERROR' (from defaults)
PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE='2' (from defaults)
PREFECT_RUNNER_SERVER_PORT='8080' (from defaults)
PREFECT_SERVER_ALLOW_EPHEMERAL_MODE='true' (from profile)
PREFECT_SERVER_ANALYTICS_ENABLED='True' (from defaults)
PREFECT_SERVER_API_HOST='0.0.0.0' (from env)
PREFECT_SERVER_API_KEEPALIVE_TIMEOUT='5' (from defaults)
PREFECT_SERVER_API_PORT='4200' (from env)
PREFECT_SERVER_CORS_ALLOWED_HEADERS='*' (from defaults)
PREFECT_SERVER_CORS_ALLOWED_METHODS='*' (from defaults)
PREFECT_SERVER_CORS_ALLOWED_ORIGINS='*' (from defaults)
PREFECT_SERVER_CSRF_PROTECTION_ENABLED='False' (from defaults)
PREFECT_SERVER_CSRF_TOKEN_EXPIRATION='1:00:00' (from defaults)
PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS='10' (from defaults)
PREFECT_SILENCE_API_URL_MISCONFIGURATION='False' (from defaults)
PREFECT_SQLALCHEMY_MAX_OVERFLOW='None' (from defaults)
PREFECT_SQLALCHEMY_POOL_SIZE='None' (from defaults)
PREFECT_TASKS_REFRESH_CACHE='False' (from defaults)
PREFECT_TASK_DEFAULT_RETRIES='0' (from defaults)
PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS='0' (from defaults)
PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS='30' (from defaults)
PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK='None' (from defaults)
PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS='True' (from defaults)
PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE='100' (from defaults)
PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE='1000' (from defaults)
PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT='0:00:00' (from defaults)
PREFECT_TEST_MODE='False' (from defaults)
PREFECT_TEST_SETTING='FOO' (from defaults)
...
```
m
Your message was too long, here's your message at the allowed limit: [truncated echo of the config dump above]
thought for 55.9 seconds
Thanks for the config dump — this confirms what’s going on.
- You’ve set `PREFECT_EVENTS_RETENTION_PERIOD='3d'`, so Prefect Server’s events service is trying to prune any `events.occurred < now() - 3 days`.
- Your server’s DB statement timeout is the default `PREFECT_API_DATABASE_TIMEOUT='10.0'` seconds.
- The service runs its prune loop periodically (your logs show ~15m cadence) and cancels the DELETE when it hits that 10s client-side timeout. Postgres reports that as:
```
ERROR:  canceling statement due to user request
```
which matches a client cancel, not a Postgres `statement_timeout`.
How to stop the noise (and let pruning complete)
1) Increase the Prefect Server DB statement timeout
- Give the delete more time to finish:
```
# try 60–120s depending on your DB
export PREFECT_API_DATABASE_TIMEOUT=60
# then restart your Prefect Server
```
This is the single most effective fix in your setup.
2) Ensure the right index exists
- The prune condition filters on `occurred`; make sure it’s indexed:
```
CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);
```
Then run a maintenance pass:
```
VACUUM (VERBOSE, ANALYZE) events;
```
3) Optional: temporarily reduce delete pressure or drain backlog
- If you have a very large backlog older than 3d, you can:
  - Temporarily lengthen retention (e.g., to 7d) to stop deletes right away, then drain in chunks manually during a maintenance window:
```
-- repeat until no rows remain
DELETE FROM events
WHERE id IN (
  SELECT id
  FROM events
  WHERE occurred < now() - interval '3 days'
  LIMIT 5000
);
VACUUM (ANALYZE) events;
```
  - Or just keep 3d and let the service catch up after you increase the timeout.
Notes on related settings you showed
- The `PREFECT_API_SERVICES_EVENT_PERSISTER_*` entries (batch size, flush interval) control write batching; they are not the prune schedule.
- The prune interval itself isn’t exposed in your config dump; the ~15m cadence you see is the service’s internal loop.
- You do not need to change Postgres `statement_timeout` here — the cancel is client-driven.
If you tell me how you’re running Prefect Server (Docker/Compose, Kubernetes, bare process), I can share exact steps to set the env and restart.
References
- Database maintenance tips
- Event persister service API
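If you want to check that the index actually helps, a quick plan check works (a sketch; the pod, user, and db names follow the earlier kubectl examples, and the 3-day cutoff mirrors your retention setting):
```
# EXPLAIN the prune DELETE; once the index exists you should see an index
# scan on occurred rather than a sequential scan over the whole table.
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c \
  "EXPLAIN DELETE FROM events WHERE occurred < now() - interval '3 days';"
```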
h
@Marvin i am running via helm on kubernetes
m
thought for 106.2 seconds
Great — since you’re on Helm/Kubernetes, you’ve got two practical options to apply the fix (increase the API DB timeout and keep your retention as-is):
Option A: quick rollout using kubectl (no chart edits)
1) Find the API deployment name:
```
kubectl -n <namespace> get deploy | grep api
```
2) Set the timeout and (optionally) confirm retention:
```
kubectl -n <namespace> set env deploy/<api-deployment> PREFECT_API_DATABASE_TIMEOUT=60
# you already have 3d, but here for completeness:
kubectl -n <namespace> set env deploy/<api-deployment> PREFECT_EVENTS_RETENTION_PERIOD=3d
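# optional sketch (not in the original steps): confirm the variable landed;
# `kubectl exec deploy/<name>` picks one pod from the deployment
kubectl -n <namespace> exec deploy/<api-deployment> -- env | grep PREFECT_API_DATABASE_TIMEOUT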
```
3) Wait for the rollout to complete:
```
kubectl -n <namespace> rollout status deploy/<api-deployment>
```
Option B: make it permanent via Helm values and upgrade
- Add these env vars to your values.yaml for the API container, then helm upgrade. The exact key can differ by chart version; in the values you shared it’s `server.env` (a list of name/value pairs). Check with:
```
helm show values prefect/prefect-server | less
```
Then add something like:
```
server:
  env:
    - name: PREFECT_API_DATABASE_TIMEOUT
      value: "60"
    - name: PREFECT_EVENTS_RETENTION_PERIOD
      value: "3d"
```
Apply:
```
helm upgrade <release-name> prefect/prefect-server -n <namespace> -f values.yaml
```
Highly recommended DB-side tweaks (one-time)
- Index on `occurred` to speed up pruning:
```
CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);
VACUUM (VERBOSE, ANALYZE) events;
```
If you need a hand running that in your cluster (Bitnami PostgreSQL subchart or external DB), I can share a one-off Kubernetes Job manifest to run psql.
Why this works
- Your config shows `PREFECT_EVENTS_RETENTION_PERIOD='3d'` and `PREFECT_API_DATABASE_TIMEOUT='10.0'`. The pruning delete is getting cancelled by the client when it hits the 10s limit, which Postgres logs as “canceling statement due to user request”. Raising `PREFECT_API_DATABASE_TIMEOUT` (e.g., 60s) usually lets each delete finish and removes the log spam.
References
- Prefect Helm repo: Prefect Helm
- DB maintenance tips: Database maintenance
- Event persister service API: Event persister API
If you share your namespace and Helm release name, I can tailor the exact commands (and a psql Job) for your setup.
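In the meantime, a lighter-weight stand-in for that Job is a one-off pod (a sketch with several assumptions: the Bitnami-style service name `prefect-server-postgresql`, and the credentials from the values shared earlier; in practice pull the password from the release secret rather than inlining it):
```
# Run psql once against the in-cluster Postgres, then clean up the pod.
# Two -c flags because VACUUM cannot run inside the implicit transaction
# of a single -c statement list.
kubectl run psql-maint -n <ns> --rm -it --restart=Never \
  --image=bitnami/postgresql:14.3.0 \
  --env="PGPASSWORD=mysecurepassword" \
  -- psql -h prefect-server-postgresql -U prefect-admin -d server \
  -c "CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);" \
  -c "VACUUM (ANALYZE) events;"
```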