Hendrik
09/05/2025, 2:01 PM
Marvin
09/05/2025, 2:01 PM
You can set max_wal_size, min_wal_size, and shared_buffers directly in your Postgres release’s values. Depending on the Bitnami chart version, you’ll use one of these patterns:
- Newer Bitnami PostgreSQL charts (primary/replica split):
values.yaml
primary:
  extendedConfiguration: |
    shared_buffers = '2GB'
    max_wal_size = '4GB'
    min_wal_size = '1GB'
- Older Bitnami PostgreSQL charts:
values.yaml
postgresqlExtendedConf:
  shared_buffers: '2GB'
  max_wal_size: '4GB'
  min_wal_size: '1GB'
Apply with your normal Helm install/upgrade:
helm upgrade --install my-postgres bitnami/postgresql -n <namespace> -f values.yaml
Notes:
- Make sure your Postgres pod has enough memory for shared_buffers (configure primary.resources accordingly).
- Use MB/GB units that Postgres accepts (e.g., 512MB, 2GB) and quote values to avoid YAML type issues.
- After changing these, the Postgres pods will restart to pick up the new config.
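If you want to watch that restart complete, here is a minimal sketch; it assumes the Bitnami default StatefulSet name <release>-postgresql (so my-postgres-postgresql for the release above), adjust to your actual resource name:
kubectl -n <namespace> rollout status statefulset/my-postgres-postgresql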
Docs/References:
- Bitnami PostgreSQL chart README and values: bitnami/charts: postgresql
- Artifact Hub (values reference): Artifact Hub: bitnami/postgresql
If you share your Bitnami chart version (e.g., helm show chart bitnami/postgresql | grep version) or your current values.yaml, I can tailor the exact fields for your setup.
Hendrik
09/05/2025, 2:07 PM
commonAnnotations: {}
commonLabels: {}
fullnameOverride: prefect-server
ingress:
  annotations: {}
  className: ''
  enabled: false
  extraHosts: null
  extraPaths: null
  extraRules: null
  extraTls: null
  host:
    hostname: prefect.local
    path: /
    pathType: ImplementationSpecific
  selfSigned: false
  servicePort: server-svc-port
  tls: false
nameOverride: ''
namespaceOverride: ''
postgresql:
  auth:
    database: server
    enablePostgresUser: false
    password: mysecurepassword
    username: prefect-admin
  enabled: true
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
    postgresqlConfiguration:
      checkpoint_completion_target: '0.9'
      checkpoint_timeout: 15min
      max_wal_size: 4GB
      min_wal_size: 1GB
      shared_buffers: 512MB
      statement_timeout: 30min
secret:
  create: true
  database: ''
  host: ''
  name: ''
  password: ''
  port: ''
  username: ''
server:
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 100
    minReplicas: 1
    targetCPU: 80
    targetMemory: 80
  containerSecurityContext:
    allowPrivilegeEscalation: false
    capabilities: {}
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1001
  debug: false
  env:
    - name: PREFECT_EVENTS_RETENTION_PERIOD
      value: 3d
  extraArgs: null
  extraContainers: null
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  extraVolumeMounts: null
  extraVolumes: null
  image:
    prefectTag: 3.0.10-python3.11-kubernetes
    pullPolicy: IfNotPresent
    pullSecrets:
      - gitlab-token-auth
    repository: prefecthq/prefect
  livenessProbe:
    config:
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    enabled: false
  loggingLevel: WARNING
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    fsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
  prefectApiHost: 0.0.0.0
  prefectApiUrl: http://localhost:4200/api
  priorityClassName: ''
  readinessProbe:
    config:
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    enabled: false
  replicaCount: 1
  resources:
    limits:
      cpu: '1'
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 512Mi
  revisionHistoryLimit: 10
  tolerations: null
  uiConfig:
    enabled: true
    prefectUiApiUrl: ''
    prefectUiStaticDirectory: /ui_build
    prefectUiUrl: ''
service:
  annotations: {}
  clusterIP: ''
  externalTrafficPolicy: Cluster
  extraPorts: []
  nodePort: ''
  port: 4200
  targetPort: 4200
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: prefect-server
global:
  cattle:
    systemProjectId: p-kmh5z
Marvin
09/05/2025, 2:08 PM
Postgres here is created by the postgresql block in your Prefect Server values. The Worker chart does not create a database; only the Server chart does.
The key you used (primary.postgresqlConfiguration) isn’t one that Bitnami’s PostgreSQL chart reads. Use one of these (which one depends on the Bitnami chart version that your Prefect Server chart depends on):
- Newer Bitnami chart (recommended): primary.extendedConfiguration (multiline string of postgresql.conf entries)
- Older Bitnami chart: postgresqlExtendedConf (map of settings)
Here’s your values adapted to both styles — pick the one that matches your Bitnami version.
Newer Bitnami (primary.extendedConfiguration)
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
    # Postgres tunables
    extendedConfiguration: |
      checkpoint_completion_target = '0.9'
      checkpoint_timeout = '15min'
      max_wal_size = '4GB'
      min_wal_size = '1GB'
      shared_buffers = '512MB'
      statement_timeout = '30min'
    # Ensure memory resources are compatible with shared_buffers
    resources:
      requests:
        cpu: "250m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"
Older Bitnami (postgresqlExtendedConf)
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
  # Postgres tunables
  postgresqlExtendedConf:
    checkpoint_completion_target: '0.9'
    checkpoint_timeout: '15min'
    max_wal_size: '4GB'
    min_wal_size: '1GB'
    shared_buffers: '512MB'
    statement_timeout: '30min'
How to tell which one you need
- Check the Bitnami PostgreSQL dependency version in the Prefect Server chart you’re running (a quick check is sketched after this list), then consult its README/values:
- Prefect Helm charts: Prefect Helm: prefect-server
- Bitnami PostgreSQL chart: Bitnami Helm: postgresql
- If you have the chart version handy, you can also helm show values for that Bitnami version and search for extendedConfiguration vs postgresqlExtendedConf.
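One way to read the Bitnami dependency pinned in the server chart's Chart.yaml (helm show chart prints the chart metadata, including its dependencies; this assumes a prefecthq repo alias pointing at the Prefect Helm repo):
helm show chart prefecthq/prefect-server | grep -A 2 "name: postgresql"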
A couple of tips
- Quote sizes and durations (e.g., '4GB', '15min') to avoid YAML typing issues.
- Make sure the pod’s memory requests/limits support your shared_buffers choice.
- After applying, the Postgres pod will restart. You can verify the settings from inside the pod:
kubectl exec -n <ns> -it <postgres-pod> -- psql -U <user> -d <db> -c "show shared_buffers; show max_wal_size; show min_wal_size; show statement_timeout;"
- Only the Server chart manages Postgres; you don’t need to (and can’t) configure a DB in the Worker chart.
If you share your Prefect Server chart version (and I can infer the Bitnami dependency), I can tell you exactly which field your setup expects.
Hendrik
09/05/2025, 2:10 PM
Marvin
09/05/2025, 2:11 PM
The bitnami/postgresql:14.3.0 you’re setting is the Docker image tag; the config keys you can use are determined by the Bitnami PostgreSQL Helm chart version that the Prefect Server chart depends on.
In the Prefect Server chart, Postgres is created only by the Server chart’s postgresql subchart. To pass tunables like max_wal_size, use the Bitnami chart’s supported keys. In most recent versions, that’s primary.extendedConfiguration (multiline string). Older versions use postgresqlExtendedConf (map).
Try this first (most likely to work):
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
    extendedConfiguration: |
      checkpoint_completion_target = '0.9'
      checkpoint_timeout = '15min'
      max_wal_size = '4GB'
      min_wal_size = '1GB'
      shared_buffers = '512MB'
      statement_timeout = '30min'
If your Bitnami subchart is older and doesn’t support extendedConfiguration, use:
postgresql:
  enabled: true
  auth:
    enablePostgresUser: false
    username: prefect-admin
    password: mysecurepassword
    database: server
  image:
    tag: 14.3.0
  postgresqlExtendedConf:
    checkpoint_completion_target: '0.9'
    checkpoint_timeout: '15min'
    max_wal_size: '4GB'
    min_wal_size: '1GB'
    shared_buffers: '512MB'
    statement_timeout: '30min'
  primary:
    initdb:
      user: postgres
    persistence:
      enabled: true
Apply as usual:
helm upgrade --install prefect-server prefecthq/prefect-server -n <ns> -f values.yaml
Verify the settings in the running pod:
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "show shared_buffers; show max_wal_size; show min_wal_size; show statement_timeout;"
How to tell which key your setup supports
- Check the Bitnami subchart version your Prefect Server release uses, then look up its README/values:
- Prefect Server chart: Prefect Helm: prefect-server
- Bitnami PostgreSQL chart: Bitnami Helm: postgresql
- Quick checks you can run:
- Show the Bitnami values your release knows about:
helm show values bitnami/postgresql | grep -n "extendedConfiguration\|postgresqlExtendedConf"
- Or inspect the installed release to find the Bitnami chart annotation:
helm get manifest <server-release> -n <ns> | grep -m1 "chart: postgresql-"
Notes
- The Worker chart does not provision a database; only the Server chart’s postgresql.enabled: true does.
- Keep values quoted (e.g., '4GB', '15min') to avoid YAML typing issues.
- Ensure the Postgres pod has enough memory for your shared_buffers choice (set postgresql.primary.resources accordingly).
If you can share your Prefect Server chart version (and/or the Bitnami subchart version from the last command), I can confirm exactly which key your release expects.
Hendrik
09/05/2025, 2:16 PM
extendedConfiguration: |
  checkpoint_completion_target = '0.9'
  checkpoint_timeout = '15min'
  max_wal_size = '4GB'
  min_wal_size = '1GB'
  shared_buffers = '512MB'
  statement_timeout = '30min'
this worked
Marvin
09/05/2025, 2:16 PM
Glad to hear primary.extendedConfiguration worked!
A few quick follow-ups you may find useful:
- Verify the live settings in the pod
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "
SHOW shared_buffers;
SHOW max_wal_size;
SHOW min_wal_size;
SHOW checkpoint_timeout;
SHOW statement_timeout;"
Or, for a full view:
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "
SELECT name, setting, unit, context, source FROM pg_settings
WHERE name IN ('shared_buffers','max_wal_size','min_wal_size','checkpoint_timeout','statement_timeout')
ORDER BY name;"
- Make sure resources match your settings (especially shared_buffers). You can set:
postgresql:
  primary:
    resources:
      requests:
        cpu: "250m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"
- Watch disk usage when increasing WAL sizes; ensure the PVC has enough headroom (a quick check is sketched after this list). Note that growing an existing PVC requires a storage class that allows volume expansion:
postgresql:
  primary:
    persistence:
      enabled: true
      size: 20Gi  # or larger based on your WAL/data needs
- If you prefer to pass config inline at upgrade time, note that the newlines must be real; in bash, $'...' quoting turns \n into newlines:
helm upgrade --install prefect-server prefecthq/prefect-server -n <ns> \
  --set-string postgresql.primary.extendedConfiguration=$'shared_buffers = \'512MB\'\nmax_wal_size = \'4GB\'\nmin_wal_size = \'1GB\''
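For the disk-headroom check mentioned above, a minimal sketch; it assumes the Bitnami image's default data mount at /bitnami/postgresql and placeholder namespace/pod names:
kubectl exec -n <ns> -it <postgres-pod> -- df -h /bitnami/postgresql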
References
- Prefect Server Helm chart: Prefect Helm: prefect-server
- Bitnami PostgreSQL chart (values and config options): Bitnami Helm: postgresql
If you want, share your expected workload (flow volume/concurrency), and I can suggest a starting point for Postgres and Prefect API database pool sizing.
Hendrik
09/05/2025, 6:44 PM
2025-09-05 17:17:39.517 GMT [30589] ERROR: canceling statement due to user request
2025-09-05 17:17:39.517 GMT [30589] STATEMENT: DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 17:32:49.604 GMT [33113] ERROR: canceling statement due to user request
2025-09-05 17:32:49.604 GMT [33113] STATEMENT: DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 17:47:59.706 GMT [35513] ERROR: canceling statement due to user request
2025-09-05 17:47:59.706 GMT [35513] STATEMENT: DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 18:03:09.787 GMT [38281] ERROR: canceling statement due to user request
2025-09-05 18:03:09.787 GMT [38281] STATEMENT: DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 18:18:19.868 GMT [40700] ERROR: canceling statement due to user request
2025-09-05 18:18:19.868 GMT [40700] STATEMENT: DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
2025-09-05 18:33:29.948 GMT [43316] ERROR: canceling statement due to user request
2025-09-05 18:33:29.948 GMT [43316] STATEMENT: DELETE FROM events WHERE events.occurred < $1::TIMESTAMP WITH TIME ZONE
Marvin
09/05/2025, 6:51 PM
What you’re seeing:
- Prefect Server stores events in the events table and runs a periodic pruning job to delete old rows.
- The job runs roughly on a fixed cadence (your timestamps show ~15 minutes) and issues
DELETE FROM events WHERE events.occurred < <retention_cutoff>
- If the delete runs long (e.g., many rows to prune, slow disk, VACUUM needed), the service cancels the statement when its time budget ends. Postgres logs this as:
ERROR: canceling statement due to user request
which is a client-initiated cancel, not a server statement_timeout.
This is safe to ignore functionally, but it can be noisy if every cycle cancels.
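To gauge how big the prunable backlog is each cycle, here is a minimal sketch; it assumes your 3d retention and the pod/user/database names from your values:
kubectl exec -n <ns> -it <postgres-pod> -- psql -U prefect-admin -d server -c "
SELECT count(*) AS prunable_rows,
       pg_size_pretty(pg_total_relation_size('events')) AS events_total_size
FROM events
WHERE occurred < now() - interval '3 days';"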
How to reduce the log spam
- Make the delete cheaper:
- Ensure there’s an index on `events(occurred)`:
CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);
- Run regular maintenance:
VACUUM (VERBOSE, ANALYZE) events;
- Let it delete for longer before it gets cancelled:
- Increase Prefect Server’s DB operation timeout (used by the server’s async Postgres engine). For example:
export PREFECT_SERVER_DATABASE_TIMEOUT=60
Then restart the server.
- Reduce the size of each delete batch so each statement finishes faster (fewer rows per DELETE). Look for the server service settings for the Event Persister and lower the delete batch size. For example, in environment variables:
# example values – tune to your DB
export PREFECT_SERVER_SERVICES_EVENT_PERSISTER_DELETE_BATCH_SIZE=2000
- Optionally adjust event retention so there are fewer rows to prune each cycle (shorter retention = smaller deletes). Check your server events settings and set a shorter retention window if that fits your needs.
Find the exact knobs in your install
- You can list your active settings and defaults, then search for the relevant keys:
prefect config view --show-defaults
Look for:
- server.database.timeout
- server.services.event_persister.* (e.g., delete batch size)
- server.events.* (retention)
References
- Prefect 3 database maintenance tips: Database maintenance
- Event persister service (API reference): Event persister API
If you can share:
- Your Prefect version
- Approximate size of the events table and retention goal
I can suggest concrete values for the timeout/batch size and a maintenance plan tailored to your setup.
Hendrik
09/05/2025, 7:04 PM
prefect config view --show-defaults
http://localhost:4200
PREFECT_PROFILE='ephemeral'
PREFECT_API_BLOCKS_REGISTER_ON_START='True' (from defaults)
PREFECT_API_DATABASE_CONNECTION_TIMEOUT='5.0' (from defaults)
PREFECT_API_DATABASE_CONNECTION_URL='********' (from env)
PREFECT_API_DATABASE_DRIVER='None' (from defaults)
PREFECT_API_DATABASE_ECHO='False' (from defaults)
PREFECT_API_DATABASE_HOST='None' (from defaults)
PREFECT_API_DATABASE_MIGRATE_ON_START='True' (from defaults)
PREFECT_API_DATABASE_NAME='None' (from defaults)
PREFECT_API_DATABASE_PASSWORD='********' (from defaults)
PREFECT_API_DATABASE_PORT='None' (from defaults)
PREFECT_API_DATABASE_TIMEOUT='10.0' (from defaults)
PREFECT_API_DATABASE_USER='None' (from defaults)
PREFECT_API_DEFAULT_LIMIT='200' (from defaults)
PREFECT_API_ENABLE_HTTP2='False' (from defaults)
PREFECT_API_ENABLE_METRICS='False' (from defaults)
PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL='0:05:00' (from defaults)
PREFECT_API_EVENTS_STREAM_OUT_ENABLED='True' (from defaults)
PREFECT_API_KEY='********' (from defaults)
PREFECT_API_LOG_RETRYABLE_ERRORS='False' (from defaults)
PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS='10000' (from defaults)
PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES='10000' (from defaults)
PREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS='20.0' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE='20' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL='5.0' (from defaults)
PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS='60' (from defaults)
PREFECT_API_SERVICES_FOREMAN_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS='30' (from defaults)
PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE='3' (from defaults)
PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS='15.0' (from defaults)
PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS='60' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS='0:00:15' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS='5.0' (from defaults)
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS='5.0' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3' (from defaults)
PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00' (from defaults)
PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED='True' (from defaults)
PREFECT_API_SERVICES_TRIGGERS_ENABLED='True' (from defaults)
PREFECT_API_SSL_CERT_FILE='None' (from defaults)
PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH='2000' (from defaults)
PREFECT_API_TLS_INSECURE_SKIP_VERIFY='False' (from defaults)
PREFECT_API_URL='http://localhost:4200/api' (from env)
PREFECT_ASYNC_FETCH_STATE_RESULT='False' (from defaults)
PREFECT_CLIENT_CSRF_SUPPORT_ENABLED='True' (from defaults)
PREFECT_CLIENT_ENABLE_METRICS='False' (from defaults)
PREFECT_CLIENT_MAX_RETRIES='5' (from defaults)
PREFECT_CLIENT_METRICS_PORT='4201' (from defaults)
PREFECT_CLIENT_RETRY_EXTRA_CODES='set()' (from defaults)
PREFECT_CLIENT_RETRY_JITTER_FACTOR='0.2' (from defaults)
PREFECT_CLI_COLORS='True' (from defaults)
PREFECT_CLI_PROMPT='None' (from defaults)
PREFECT_CLI_WRAP_LINES='True' (from defaults)
PREFECT_CLOUD_API_URL='https://api.prefect.cloud/api' (from defaults)
PREFECT_CLOUD_UI_URL='https://app.prefect.cloud' (from defaults)
PREFECT_DEBUG_MODE='false' (from env)
PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE='None' (from defaults)
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='None' (from defaults)
PREFECT_DEFAULT_WORK_POOL_NAME='None' (from defaults)
PREFECT_DEPLOYMENT_CONCURRENCY_SLOT_WAIT_SECONDS='30.0' (from defaults)
PREFECT_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS='50' (from defaults)
PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER='0:01:00' (from defaults)
PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE='500' (from defaults)
PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES='500' (from defaults)
PREFECT_EVENTS_MAXIMUM_SIZE_BYTES='1500000' (from defaults)
PREFECT_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL='0:15:00' (from defaults)
PREFECT_EVENTS_PROACTIVE_GRANULARITY='0:00:05' (from defaults)
PREFECT_EVENTS_RETENTION_PERIOD='3d' (from env)
PREFECT_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE='250' (from defaults)
PREFECT_EXPERIMENTAL_ENABLE_SCHEDULE_CONCURRENCY='False' (from defaults)
PREFECT_EXPERIMENTAL_WARN='True' (from defaults)
PREFECT_FLOW_DEFAULT_RETRIES='0' (from defaults)
PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS='0' (from defaults)
PREFECT_HOME='/home/prefect/.prefect' (from defaults)
PREFECT_LOCAL_STORAGE_PATH='/home/prefect/.prefect/storage' (from defaults)
PREFECT_LOGGING_COLORS='True' (from defaults)
PREFECT_LOGGING_EXTRA_LOGGERS='[]' (from defaults)
PREFECT_LOGGING_INTERNAL_LEVEL='ERROR' (from defaults)
PREFECT_LOGGING_LEVEL='INFO' (from defaults)
PREFECT_LOGGING_LOG_PRINTS='False' (from defaults)
PREFECT_LOGGING_MARKUP='False' (from defaults)
PREFECT_LOGGING_SERVER_LEVEL='WARNING' (from env)
PREFECT_LOGGING_SETTINGS_PATH='/home/prefect/.prefect/logging.yml' (from defaults)
PREFECT_LOGGING_TO_API_BATCH_INTERVAL='2.0' (from defaults)
PREFECT_LOGGING_TO_API_BATCH_SIZE='4000000' (from defaults)
PREFECT_LOGGING_TO_API_ENABLED='True' (from defaults)
PREFECT_LOGGING_TO_API_MAX_LOG_SIZE='1000000' (from defaults)
PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW='warn' (from defaults)
PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION='True' (from defaults)
PREFECT_MEMO_STORE_PATH='/home/prefect/.prefect/memo_store.toml' (from defaults)
PREFECT_MESSAGING_BROKER='prefect.server.utilities.messaging.memory' (from defaults)
PREFECT_MESSAGING_CACHE='prefect.server.utilities.messaging.memory' (from defaults)
PREFECT_PROFILES_PATH='/home/prefect/.prefect/profiles.toml' (from defaults)
PREFECT_RESULTS_DEFAULT_SERIALIZER='pickle' (from defaults)
PREFECT_RESULTS_PERSIST_BY_DEFAULT='False' (from defaults)
PREFECT_RUNNER_POLL_FREQUENCY='10' (from defaults)
PREFECT_RUNNER_PROCESS_LIMIT='5' (from defaults)
PREFECT_RUNNER_SERVER_ENABLE='False' (from defaults)
PREFECT_RUNNER_SERVER_HOST='localhost' (from defaults)
PREFECT_RUNNER_SERVER_LOG_LEVEL='ERROR' (from defaults)
PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE='2' (from defaults)
PREFECT_RUNNER_SERVER_PORT='8080' (from defaults)
PREFECT_SERVER_ALLOW_EPHEMERAL_MODE='true' (from profile)
PREFECT_SERVER_ANALYTICS_ENABLED='True' (from defaults)
PREFECT_SERVER_API_HOST='0.0.0.0' (from env)
PREFECT_SERVER_API_KEEPALIVE_TIMEOUT='5' (from defaults)
PREFECT_SERVER_API_PORT='4200' (from env)
PREFECT_SERVER_CORS_ALLOWED_HEADERS='*' (from defaults)
PREFECT_SERVER_CORS_ALLOWED_METHODS='*' (from defaults)
PREFECT_SERVER_CORS_ALLOWED_ORIGINS='*' (from defaults)
PREFECT_SERVER_CSRF_PROTECTION_ENABLED='False' (from defaults)
PREFECT_SERVER_CSRF_TOKEN_EXPIRATION='1:00:00' (from defaults)
PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS='10' (from defaults)
PREFECT_SILENCE_API_URL_MISCONFIGURATION='False' (from defaults)
PREFECT_SQLALCHEMY_MAX_OVERFLOW='None' (from defaults)
PREFECT_SQLALCHEMY_POOL_SIZE='None' (from defaults)
PREFECT_TASKS_REFRESH_CACHE='False' (from defaults)
PREFECT_TASK_DEFAULT_RETRIES='0' (from defaults)
PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS='0' (from defaults)
PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS='30' (from defaults)
PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK='None' (from defaults)
PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS='True' (from defaults)
PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE='100' (from defaults)
PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE='1000' (from defaults)
PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT='0:00:00' (from defaults)
PREFECT_TEST_MODE='False' (from defaults)
PREFECT_TEST_SETTING='FOO' (from defaults)
...
Marvin
09/05/2025, 7:06 PM
- Your config shows PREFECT_EVENTS_RETENTION_PERIOD='3d', so Prefect Server’s events service is trying to prune any events.occurred < now() - 3 days.
- Your server’s DB statement timeout is the default PREFECT_API_DATABASE_TIMEOUT='10.0' seconds.
- The service runs its prune loop periodically (your logs show ~15m cadence) and cancels the DELETE when it hits that 10s client-side timeout. Postgres reports that as:
ERROR: canceling statement due to user request
which matches a client cancel, not a Postgres statement_timeout.
How to stop the noise (and let pruning complete)
1) Increase the Prefect Server DB statement timeout
- Give the delete more time to finish:
# try 60–120s depending on your DB
export PREFECT_API_DATABASE_TIMEOUT=60
# then restart your Prefect Server
This is the single most effective fix in your setup.
2) Ensure the right index exists
- The prune condition filters on `occurred`; make sure it’s indexed:
CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);
Then run a maintenance pass:
VACUUM (VERBOSE, ANALYZE) events;
3) Optional: temporarily reduce delete pressure or drain backlog
- If you have a very large backlog older than 3d, you can:
- Temporarily lengthen retention (e.g., to 7d) to stop deletes right away, then drain in chunks manually during a maintenance window (a loop sketch follows this list):
-- repeat until no rows remain
DELETE FROM events
WHERE id IN (
  SELECT id
  FROM events
  WHERE occurred < now() - interval '3 days'
  LIMIT 5000
);
VACUUM (ANALYZE) events;
- Or just keep 3d and let the service catch up after you increase the timeout.
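Here is a minimal drain-loop sketch for the chunked delete above; the namespace/pod placeholders and the password from your values are assumptions, and each pass reports how many rows it removed:
# loop the chunked delete until a pass removes zero rows
while true; do
  deleted=$(kubectl exec -n <ns> <postgres-pod> -- \
    env PGPASSWORD=mysecurepassword psql -U prefect-admin -d server -t -A -c "
      WITH gone AS (
        DELETE FROM events
        WHERE id IN (
          SELECT id FROM events
          WHERE occurred < now() - interval '3 days'
          LIMIT 5000
        )
        RETURNING 1
      )
      SELECT count(*) FROM gone;")
  echo "deleted ${deleted} rows"
  if [ "${deleted:-0}" -eq 0 ]; then break; fi
done
# reclaim space and refresh stats once the backlog is drained
kubectl exec -n <ns> <postgres-pod> -- \
  env PGPASSWORD=mysecurepassword psql -U prefect-admin -d server -c "VACUUM (ANALYZE) events;"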
Notes on related settings you showed
- PREFECT_API_SERVICES_EVENT_PERSISTER_* entries (batch size, flush interval) control write batching; they are not the prune schedule.
- The prune interval itself isn’t exposed in your config dump; the ~15m cadence you see is the service’s internal loop.
- You do not need to change Postgres statement_timeout here — the cancel is client-driven.
If you tell me how you’re running Prefect Server (Docker/Compose, Kubernetes, bare process), I can share exact steps to set the env and restart.
References
- Database maintenance tips
- Event persister service API
Hendrik
09/05/2025, 7:10 PM
Marvin
09/05/2025, 7:12 PM
Option A: set the env var directly on the API deployment with kubectl
1) Find the API deployment:
kubectl -n <namespace> get deploy | grep api
2) Set the timeout and (optionally) confirm retention:
kubectl -n <namespace> set env deploy/<api-deployment> PREFECT_API_DATABASE_TIMEOUT=60
# you already have 3d, but here for completeness:
kubectl -n <namespace> set env deploy/<api-deployment> PREFECT_EVENTS_RETENTION_PERIOD=3d
3) Wait for the rollout to complete:
kubectl -n <namespace> rollout status deploy/<api-deployment>
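To confirm the new value landed in the container, a quick sketch (the deployment name is a placeholder):
kubectl -n <namespace> exec deploy/<api-deployment> -- env | grep PREFECT_API_DATABASE_TIMEOUT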
Option B: make it permanent via Helm values and upgrade
- Add these env vars to your values.yaml for the API container, then helm upgrade. The exact key can differ by chart version, but your pasted values already use server.env (a list of name/value pairs). Check with:
helm show values prefect/prefect-server | less
Then extend that list:
server:
  env:
    - name: PREFECT_API_DATABASE_TIMEOUT
      value: "60"
    - name: PREFECT_EVENTS_RETENTION_PERIOD
      value: "3d"
Apply:
helm upgrade <release-name> prefect/prefect-server -n <namespace> -f values.yaml
Highly recommended DB-side tweaks (one-time)
- Index on occurred to speed up pruning:
CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);
VACUUM (VERBOSE, ANALYZE) events;
If you need a hand running that in your cluster (Bitnami PostgreSQL subchart or external DB), I can share a one-off Kubernetes Job manifest to run psql.
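For example, a one-off sketch using kubectl run in place of a full Job manifest; the in-cluster service name prefect-server-postgresql and the credentials from your values are assumptions:
kubectl -n <ns> run psql-maint --rm -i --restart=Never \
  --image=bitnami/postgresql:14.3.0 \
  --env=PGPASSWORD=mysecurepassword -- \
  psql -h prefect-server-postgresql -U prefect-admin -d server \
    -c "CREATE INDEX IF NOT EXISTS idx_events_occurred ON events (occurred);" \
    -c "VACUUM (VERBOSE, ANALYZE) events;"
Each -c runs in its own transaction, which matters here because VACUUM cannot run inside a transaction block.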
Why this works
- Your config shows PREFECT_EVENTS_RETENTION_PERIOD='3d' and PREFECT_API_DATABASE_TIMEOUT='10.0'. The pruning delete is getting cancelled by the client when it hits the 10s limit, which Postgres logs as “canceling statement due to user request”. Raising PREFECT_API_DATABASE_TIMEOUT (e.g., 60s) usually lets each delete finish and removes the log spam.
References
- Prefect Helm repo: Prefect Helm
- DB maintenance tips: Database maintenance
- Event persister service API: Event persister API
If you share your namespace and Helm release name, I can tailor the exact commands (and a psql Job) for your setup.