# ask-marvin
Ryan Sharp
@Marvin I was able to trace the lifecycle of a task through the redis logs and redis is tracking the task successfully and all 3 instances of the API server and the background services are acknowledging each state change from Pending > Running > Completed. However, I found that the task_run is not updated until 15 minutes afterwards in my Postgres db, causing task states to not update as expected in the UI. Why might there be a 15 minute delay and what service is writing the task state to Postgres specifically?
Marvin
🔄 Thinking... this may take a while
Brendan Dalpe
@Marvin did you give up answering the question?
Marvin
✅ thought for 12.5 seconds
Not at all—I'm here and ready to help! Sorry if it looked like I dropped the thread. Could you paste your question again (or a quick summary of what you're trying to do)? If it's Prefect-related, the following details will help me get you an answer fast:
- Prefect version (assuming 3.x unless you say otherwise)
- How you're running things (local, Docker, Kubernetes, Cloud)
- Any error message/logs
- A minimal code snippet or deployment/work pool setup

Once I have that, I'll dive right in.
Brendan Dalpe
@Marvin please answer @Ryan Sharp’s question:
I was able to trace the lifecycle of a task through the redis logs and redis is tracking the task successfully and all 3 instances of the API server and the background services are acknowledging each state change from Pending > Running > Completed. However, I found that the task_run is not updated until 15 minutes afterwards in my Postgres db, causing task states to not update as expected in the UI. Why might there be a 15 minute delay and what service is writing the task state to Postgres specifically?
Marvin
🔄 Researching your answer... this may take a while
🔧 Using tool: `research_prefect_topic`
📊 Tools used: `explore_module_offerings` (1x) • `read_github_issues` (1x) • `research_prefect_topic` (3x)
Ryan Sharp
@Marvin my task runs are not updating in the UI until 15 minutes have passed, I can see in the redis logs that the task is moving from Pending to Running to Completed as expected and each state change is acknowledged by each prefect API server and the prefect background services. However, I can see that the task_run entry for that task is not updated for 15 minutes in postgres. Why might this delay be occurring and is it normal for the UI to be reliant on postgres for the task_run state?
Marvin
✅ thought for 268.7 seconds
Ryan Sharp
@Brendan Dalpe thanks, not sure why it doesn't like my question, I tried rephrasing
Brendan Dalpe
I think Marvin is having a hard time with this specific issue. I will look into why it's dying during the research. I don't think it's going to be able to answer this one right now.
Marvin
Short answer: a 15‑minute delay is not normal. The Prefect UI gets run state from the Prefect API, and the API's source of truth is Postgres. Redis is used for ephemeral coordination/pub‑sub, not as the durable store for run state. If Postgres doesn't show the new `task_run` state promptly, the UI will look "stale" until the database has the update.

Given your symptoms (Redis sees state transitions immediately; API/background services acknowledge; Postgres row doesn't reflect the change for ~15 minutes), the most common root causes are:
- Your API is reading from a Postgres read replica that has replication lag
  - Classic case if you're on RDS/Aurora and pointed the API (or your own SQL client) at a "reader endpoint" or a load‑balancer/proxy that routes SELECTs to a replica
  - The UI calls the API; if the API reads from a lagging replica, it will serve stale data
- PgBouncer in transaction pooling mode or a misconfigured DB proxy
  - Transaction pooling can break session semantics and sometimes leads to surprising read‑your‑writes issues
  - Prefect expects read‑your‑writes consistency; use session pooling for PgBouncer
- An HTTP proxy/CDN caching API responses
  - Less likely based on your note about the Postgres row itself not changing, but if your SQL checks are also going to a replica, this could still be part of the picture
- Database pressure or locks causing commit delays
  - Check DB metrics for long transactions, lock waits, or slow checkpoints; this is rarer to line up at exactly ~15 minutes, but worth ruling out

What to check next
1) Verify where you're reading/writing (see the psql sketch after this message)
- Confirm the database URL used by the Prefect API points to the writer/primary
- From the DB used by the API, run:
  - `SELECT pg_is_in_recovery();` — should be false on the writer
  - On replicas, `SELECT now() - pg_last_xact_replay_timestamp();` — shows replication lag
- If you use Aurora/RDS Proxy or any LB that splits reads/writes, make sure the Prefect API is not hitting a replica for reads
2) If using PgBouncer
- Set `pool_mode = session`
- Ensure server reset/transaction settings are sane for SQLAlchemy apps
- Restart PgBouncer and Prefect API pods after changes
3) Rule out HTTP caching in front of the API
- Ensure your reverse proxy/CDN isn't caching GET responses to Prefect API endpoints (Cache-Control headers, default TTLs like 900s/15m)
- Temporarily bypass the proxy and hit the API directly to compare behavior
4) Turn on more logging
- Enable debug logging on the API to confirm state write operations are succeeding immediately
- Watch DB slow query/lock logs around the time of state changes

To confirm, could you share:
- Your Prefect version (assuming 3.x) and how you're deploying the API (k8s, Docker, etc.)
- The exact Postgres topology (managed service? primary/replicas? proxies like RDS Proxy or PgBouncer?)
- The database connection string that the Prefect API uses (redact secrets), and whether any read/write splitting is in play
- Whether your own SQL checks are run against the primary or a replica

Docs for self‑hosting and tuning Prefect are here: https://docs.prefect.io
Once we know a bit more about your DB/proxy setup, we can pinpoint whether it's replica lag, pooling mode, or something else.
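A minimal shell sketch for step 1, assuming psql can reach the database endpoint the API uses; `PREFECT_DB_URL` is a placeholder you set yourself, not a value from this thread:

```bash
#!/usr/bin/env bash
# Check whether the endpoint the Prefect API uses is the writer, and how far
# behind any replica is. PREFECT_DB_URL is an assumption; set it to the same
# URL the API uses (without the +asyncpg driver suffix).
DB_URL="${PREFECT_DB_URL:?set PREFECT_DB_URL to the Postgres URL the API uses}"

# 'f' means this endpoint is the primary/writer; 't' means it is a replica.
psql "$DB_URL" -Atc "SELECT pg_is_in_recovery();"

# On a replica, show how far behind replay is (NULL on the primary).
psql "$DB_URL" -Atc "SELECT now() - pg_last_xact_replay_timestamp();"

# When task_run rows were last touched, to compare against what the UI shows.
psql "$DB_URL" -Atc "SELECT max(updated) FROM task_run;"
```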
Ryan Sharp
@Marvin PREFECT_API_DATABASE_CONNECTION_URL=postgresql+asyncpg://prefect:password@10.42.7.140:5432/prefect The Prefect version is 3.4.12, and Postgres is a 2-server HA cluster using Patroni and keepalived. Prefect connects to the VIP advertised by the current Patroni-elected leader for both reads and writes. My SQL checks are against the primary Postgres server, and it is the actual write that comes in 15 minutes later for every single task (plus or minus a few seconds each time)
Marvin
✅ thought for 965.7 seconds
Thanks for the details — that rules out read-replica lag.
- It is normal that the UI relies on Postgres for `task_run` state. Redis is used for ephemeral coordination; Postgres is the durable source the API/UI read from.
- A consistent ~15 minute delay on every task strongly suggests a server-side batching/consumer issue, not a DB commit/lock problem.

What this usually means in Prefect 3.x
- Prefect's server emits task/flow run state events and has a background service (TaskRunRecorder) that consumes those events and persists them to Postgres.
- If the real-time consumer is not running, a periodic "catch-up" process will reconcile events at an interval — 15 minutes aligns exactly with that kind of backfill cadence. The result looks like: Redis shows immediate state changes, API logs "ack", but the `task_run` rows only change in Postgres at a fixed interval.

How to confirm quickly
1) Check the services status (a kubectl sketch for running this in every pod follows after this message)
- From a host/container that can reach your Prefect server environment:
```bash
prefect server services list-services
```
You should see TaskRunRecorder (and other services) listed and running.
2) If services aren't running, start them
- Run them alongside your API containers or as a separate process/pod:
```bash
prefect server services start-services --background
```
- In k8s, many users run the API (uvicorn/gunicorn) in one deployment and a separate "services" deployment running the command above so the consumers always run.
3) Check server logs
- Look for logs like "Starting TaskRunRecorder" and "consuming" vs only seeing periodic "processing followers"/"backfill" messages every ~15 minutes.
- If you see only periodic processing, the live consumer likely isn't running or isn't connected to the same Redis.
4) Sanity-check DB timing
- If rows bunch in groups every ~900 seconds, you're seeing the periodic backfill:
```sql
SELECT date_trunc('minute', updated) AS minute_bucket, count(*)
FROM task_run
WHERE updated > now() - interval '2 hours'
GROUP BY 1
ORDER BY 1 DESC;
```

Other things to double-check
- All API pods and the services process must point to the same Redis and the same Postgres (your VIP). A mismatch can look like "acknowledged" in logs but writes never land until the backfill job runs.
- If you're launching the API with a custom uvicorn/gunicorn command, that does not automatically run services; you need the separate `prefect server services start-services` process.
- Only one services process is fine (it coordinates work), but running multiple is okay as well — they use consumer groups to share the load.

If TaskRunRecorder is already running
- Please share a few lines of its logs around a state change — we'll look for errors, retries, or configuration mismatches.
- Also confirm the Redis and Postgres connection strings used by your API pods and the services process (redact secrets); they must match.

Next step
- Can you run `prefect server services list-services` in your environment and paste the output (or confirm whether TaskRunRecorder is running)? If it's not, start the services and the UI should update in near real-time again.
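A hedged kubectl sketch for step 1 above; the namespace and label selector are assumptions that will differ per chart. It simply runs the services listing inside each server pod so the outputs can be compared:

```bash
#!/usr/bin/env bash
# List which background services each Prefect server pod believes it is running.
# NAMESPACE and SELECTOR are assumptions; adjust them to your deployment.
NAMESPACE="${NAMESPACE:-prefect}"
SELECTOR="${SELECTOR:-app.kubernetes.io/name=prefect-server}"

for pod in $(kubectl get pods -n "$NAMESPACE" -l "$SELECTOR" -o name); do
  echo "=== $pod ==="
  # 'prefect server services ls' is the command used later in this thread
  kubectl exec -n "$NAMESPACE" "$pod" -- prefect server services ls
done
```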
Ryan Sharp
@Marvin I confirmed the task run recorder is running on all 3 API servers and the background services server, and the env variables are all identical. I did see these errors related to the tasks in the server logs:
18:55:11.159 | DEBUG | prefect.server.events.ordering.db - Event 'prefect.task-run.Running' (0198cdfc-2550-784f-a8f9-50d955b3c754) for 'prefect.task-run.0198cdfc-253f-7981-b740-109581009a9b' arrived before the event it follows 0198cdfc-253f-7d0a-889c-a0849d6d467c
18:55:11.163 | DEBUG | prefect.server.events.services.event_persister - Received event: prefect.task-run.Running with id: 0198cdfc-2550-784f-a8f9-50d955b3c754 for resource: prefect.task-run.0198cdfc-253f-7981-b740-109581009a9b
18:55:11.165 | DEBUG | prefect.server.services.task_run_recorder - Received event: prefect.task-run.Pending with id: 0198cdfc-2613-764f-9453-86145b2642bf for resource: prefect.task-run.0198cdfc-2613-75f1-bbc7-28e61bc8086e
18:55:11.180 | DEBUG | prefect.server.services.task_run_recorder - Recorded task run state change
18:55:11.765 | DEBUG | prefect.server.services.task_run_recorder - Received event: prefect.task-run.Running with id: 0198cdfc-279e-700b-9000-edaeccb44217 for resource: prefect.task-run.0198cdfc-2798-77b8-8cdf-2542dc4eddc8
18:55:11.766 | INFO | prefect.server.events.ordering.memory - Event 'prefect.task-run.Running' (0198cdfc-279e-700b-9000-edaeccb44217) for 'prefect.task-run.0198cdfc-2798-77b8-8cdf-2542dc4eddc8' arrived before the event it follows 0198cdfc-2798-7547-b40e-6684863c2cc1
18:55:11.769 | DEBUG | prefect.server.events.services.event_persister - Received event: prefect.task-run.Completed with id: 0198cdfc-291e-7404-9e59-39543d7f64a8 for resource: prefect.task-run.0198cdfc-2798-77b8-8cdf-2542dc4eddc8
18:55:11.770 | DEBUG | prefect.server.events.ordering.db - Event 'prefect.task-run.Completed' (0198cdfc-291e-7404-9e59-39543d7f64a8) for 'prefect.task-run.0198cdfc-2798-77b8-8cdf-2542dc4eddc8' arrived before the event it follows 0198cdfc-279e-700b-9000-edaeccb44217
Marvin
✅ thought for 133.4 seconds
This is super helpful — those logs are the smoking gun.

What's happening
- The messages like "Event 'prefect.task-run.Running' … arrived before the event it follows …" are emitted by Prefect's causal ordering layer for events. When the server believes it's missing the preceding event in the causal chain for a given task run, it defers applying the later event.
- In that case, the TaskRunRecorder will not persist the task_run state immediately. Instead, a periodic "lost followers"/backfill process picks these up after a timeout window — your consistent ~15 minutes aligns with that window. That's why the Postgres row updates only every ~15 minutes and the UI looks stale until then.

Why this can happen (systemically)
- Out-of-order event arrival across processes/pods: With multiple API servers, different state transitions for the same task-run may be published from different pods; if the preceding event isn't visible to the ordering module yet, the follower is temporarily held.
- Missing/trimmed predecessor events: If the Redis events stream is trimmed/evicting aggressively, the "follows" event may be gone from the backlog before the consumer sees it, so the follower waits for the timeout.
- Time/clock skew between nodes: Prefect uses time-ordered IDs (ULIDs) and the 'follows' chain; significant skew increases the odds of out-of-order arrival and missed causal references.
- Multiple Redis endpoints or misrouted publishers/consumers: If all services aren't reading/writing the same stream, ordering cannot find the predecessor.

What to do next
1) Verify events pipeline health
- Make absolutely sure all API pods and the services process point to the same Redis instance and DB (you already confirmed DB VIP; do the same for Redis).
- Check Redis stream health and retention: ensure the events stream is not being trimmed too aggressively and Redis is not evicting keys due to memory pressure. If you can, inspect stream/group stats with `XINFO STREAM` and `XINFO GROUPS` for the events stream, and make sure consumer lag isn't exploding and pending entries aren't stuck for ~15m (a redis-cli sketch follows this message).
2) Sync clocks everywhere
- Ensure NTP/chrony is running on workers, all API servers, and the background services host. Even small skews across many publishers increase out-of-order rates.
3) Reduce moving parts as a test
- Temporarily scale API down to a single pod and keep one services process running. If the 15‑minute lag disappears, we've confirmed cross-pod ordering is contributing.
- If it persists with one API pod, focus on Redis stream retention/evictions and clocks.
4) Double-check all services are running and seeing the same events
- You've confirmed TaskRunRecorder is running. Also confirm EventPersister is running (you have logs from it, which is good).
- Look for repeated "arrived before the event it follows" messages for the same run across time; if you see them repeatedly until exactly when the 15‑min backfill runs, that matches this diagnosis.
5) Upgrade to the latest 3.4.x
- You're on 3.4.12; please try the latest patch (3.4.14 at the time of writing) in case there are stability improvements around services/events. Full changelog: 3.4.12→3.4.14

Why the UI is "stuck" until Postgres updates
- Yes — the UI reads via the API, and the API uses Postgres as the durable source of truth for run state. Redis is only for ephemeral coordination/events. If the recorder defers a state change because a follower arrived before its predecessor, you won't see the update in the UI until it's persisted to Postgres.

If you can share:
- The Redis connection string used by all API pods and the services process (redact secrets)
- Whether your Redis has a maxmemory policy or explicit stream trimming
- Confirmation that NTP/chrony is active and offset is near zero across API/workers
- Whether the problem goes away with a single API pod
…we can zero in on which factor is causing the systematic out‑of‑order behavior and get you back to near‑real‑time task state in the UI.
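A minimal redis-cli sketch for the stream checks above, assuming redis-cli can reach the single Redis pod; the host is a placeholder, and rather than guessing Prefect's stream key names it scans for stream-typed keys first:

```bash
#!/usr/bin/env bash
# Inspect Redis stream and consumer-group health for the events pipeline.
# REDIS_HOST is an assumption; adjust host/port/auth for your environment.
REDIS_HOST="${REDIS_HOST:-redis}"

# Discover stream keys instead of assuming their names.
for key in $(redis-cli -h "$REDIS_HOST" --scan); do
  if [ "$(redis-cli -h "$REDIS_HOST" TYPE "$key")" = "stream" ]; then
    echo "=== $key ==="
    redis-cli -h "$REDIS_HOST" XINFO STREAM "$key"   # length, first/last entry IDs
    redis-cli -h "$REDIS_HOST" XINFO GROUPS "$key"   # consumers, pending, last-delivered-id
    # Follow up manually with: XPENDING <key> <group>  to see how long entries sit unacked
  fi
done

# Confirm Redis is not evicting keys under memory pressure.
redis-cli -h "$REDIS_HOST" INFO memory | grep -E 'maxmemory|evicted_keys'
```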
Ryan Sharp
@Marvin I verified that redis is not trimming/evicting overly aggressively, as I can see the acknowledgement from all 3 API servers and the background services pod. I am using just one redis pod during testing, there is no failover or HA for that right now. I also confirmed that chrony is running on every node and all pods are in sync. The problem does resolve with some configurations according to this matrix:

| API Server Replicas | Separate Background Services Pod | Messaging Broker | Works |
|---------------------|----------------------------------|------------------|-------|
| 1                   | Yes                              | Redis            | No    |
| 1                   | No                               | Redis            | Yes   |
| 3                   | Yes                              | Redis            | No    |
| 3                   | Yes                              | In-Memory        | Yes   |
Marvin
✅ thought for 588.6 seconds
Thanks for laying out the matrix — that's super helpful. The pattern points to a configuration mismatch that only manifests when:
- Redis is the broker, AND
- background services run in a separate pod

The most common culprit here is that only the broker was switched to Redis, while the cache remained at the default in‑memory module on one or more pods. In Prefect 3.x, the events/triggers pipeline uses both a broker and a cache module. If any API replica or the background-services pod is still using the in-memory cache while others use Redis, you can get "works in some topologies, fails in others" exactly like you're seeing.

What to check and fix
- Ensure BOTH of these settings are set to the Redis implementation on ALL server pods (every API replica and the background-services pod), not just one (a kubectl sketch for comparing what each pod sees follows at the end of this message):
  - `PREFECT_SERVER_EVENTS__MESSAGING_BROKER=prefect_redis.messaging`
  - `PREFECT_SERVER_EVENTS__MESSAGING_CACHE=prefect_redis.messaging`
- Make sure the Redis connection settings are available in every pod that needs them and that the prefect-redis package is installed in those images.
- The Redis plugin uses its own settings class (prefect-redis). If you're using a URL, the simplest pattern is to provide a single URL consistently to all pods (for example via a secret/env var such as `PREFECT_REDIS__URL=redis://:password@redis:6379/0`). If you're breaking it into host/port/user/pass, apply the same values everywhere.
- If you can share what Redis env vars you're using, I can sanity‑check them. The plugin settings are documented here: https://docs-3.prefect.io/integrations/prefect-redis/
- If you're running background services in a separate pod, confirm they are disabled on the API pods. Otherwise, you'll have multiple consumers competing on the same topics.
  - In Helm or your manifests, that typically means the API deployment runs only the API, while the background-services deployment runs only the services. If you have a custom setup, I can help verify the correct enable/disable toggles for each service.
- All pods (API replicas and background services) must be on the same Prefect and prefect-redis versions.

A minimal Kubernetes env example (apply to BOTH API and background-services pods):
```yaml
- name: PREFECT_SERVER_EVENTS__MESSAGING_BROKER
  value: prefect_redis.messaging
- name: PREFECT_SERVER_EVENTS__MESSAGING_CACHE
  value: prefect_redis.messaging
- name: PREFECT_REDIS__URL
  valueFrom:
    secretKeyRef:
      name: prefect-redis
      key: url
# Optional, but useful while validating
- name: PREFECT_LOGGING_LEVEL
  value: DEBUG
```
Why this matches your matrix
- 1 API, no separate background services, Redis → Works
  - Even if the cache stayed in-memory, both the API and services are in the same pod/process, so the memory cache accidentally "works".
- 1 API, separate background services, Redis → Fails
  - If one pod is on in‑memory cache and the other uses Redis, they won't share dedup/order state; consumers will interfere and messages can get lost or mis-ordered.
- 3 API, separate background services, Redis → Fails
  - The mismatch multiplies with more replicas.
- 3 API, separate background services, In‑Memory → Works
  - In-memory keeps each pod isolated; some internal server paths don't require cross‑pod broker/cache for basic orchestration. But it's not a scalable/HA setup.

If you can share:
- Prefect version and prefect-redis version
- The exact env vars you have on the API and background-services pods for messaging and Redis
- What "No" looks like (e.g., automations don't fire, scheduler doesn't enqueue, logs/events stop, etc.)
- A few DEBUG logs from the background-services pod (triggers/scheduler) around the time of failure
…I can help pinpoint further.

References
- Settings reference (server.events, including messaging_broker and messaging_cache): https://docs-3.prefect.io/v3/api-ref/settings-ref
- Redis integration docs: https://docs-3.prefect.io/integrations/prefect-redis/
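A quick way to compare what each pod actually sees, sketched in shell; the namespace and label selector are assumptions, and the grep pattern simply matches the variable names discussed above:

```bash
#!/usr/bin/env bash
# Compare messaging/Redis env vars across every API pod and the services pod.
# NAMESPACE and SELECTOR are assumptions; adjust them to your deployment.
NAMESPACE="${NAMESPACE:-prefect}"
SELECTOR="${SELECTOR:-app.kubernetes.io/name=prefect-server}"

for pod in $(kubectl get pods -n "$NAMESPACE" -l "$SELECTOR" -o name); do
  echo "=== $pod ==="
  # Env vars that control the broker/cache and the Redis connection
  kubectl exec -n "$NAMESPACE" "$pod" -- \
    sh -c 'env | grep -E "PREFECT_(SERVER_EVENTS|REDIS|MESSAGING)" | sort'
  # Confirm the prefect-redis package is importable in this image
  # (the interpreter may be 'python3' in your image)
  kubectl exec -n "$NAMESPACE" "$pod" -- \
    python -c "import prefect_redis; print('prefect-redis importable')" \
    || echo "prefect-redis not importable in $pod"
done
```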
Ryan Sharp
@Marvin you mentioned this directive: "If you’re running background services in a separate pod, confirm they are disabled on the API pods. Otherwise, you’ll have multiple consumers competing on the same topics." which services specifically should be disabled in the API pods and can this be done via env variables in my helm chart?
Marvin
✅ thought for 209.4 seconds
Brendan Dalpe
Yes, you start the API pods with `--no-services` in the command. This is already done if you set `backgroundServices.runAsSeparateDeployment=true`
https://github.com/PrefectHQ/prefect-helm/blob/68cf7c8e6909323a69905645708a8126fc056859/charts/prefect-server/templates/deployment.yaml#L76-L78
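One hedged way to confirm the flag actually made it into the running API pods, assuming a deployment named `prefect-server` in namespace `prefect` (adjust both to your release):

```bash
#!/usr/bin/env bash
# Check whether the API deployment's container is actually started with
# --no-services. Deployment and namespace names are assumptions.
NAMESPACE="${NAMESPACE:-prefect}"
DEPLOYMENT="${DEPLOYMENT:-prefect-server}"

# Inspect the live container command/args for the API deployment.
kubectl get deployment "$DEPLOYMENT" -n "$NAMESPACE" \
  -o jsonpath='{.spec.template.spec.containers[0].command}{"\n"}{.spec.template.spec.containers[0].args}{"\n"}'

# Or render the chart locally with your values and look for the flag
# (repo alias and values file are placeholders):
# helm template prefect-server prefect/prefect-server -f values.yaml | grep -n "no-services"
```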
Marvin
Great question — yes, if you split Prefect's background services into a separate "services" pod, you should disable all of those services on your API pods so you don't end up with multiple consumers racing on the same topics.

In Prefect 3.x, the background services you'll typically turn off on API pods are:
- Scheduler
- Foreman (worker/work-queue health/heartbeat monitoring)
- Cancellation cleanup
- Late runs marker
- Pause expirations
- Repossessor
- Triggers (automations/notifications)
- Task run recorder
- Event persister
- Event logger (this one is disabled by default, but you can set it explicitly)

You can control each one via environment variables. The pattern is `PREFECT_SERVER__SERVICES__<SERVICE_NAME>__ENABLED=false`. Here are the exact keys to set on your API pods:
- PREFECT_SERVER__SERVICES__SCHEDULER__ENABLED=false
- PREFECT_SERVER__SERVICES__FOREMAN__ENABLED=false
- PREFECT_SERVER__SERVICES__CANCELLATION_CLEANUP__ENABLED=false
- PREFECT_SERVER__SERVICES__LATE_RUNS__ENABLED=false
- PREFECT_SERVER__SERVICES__PAUSE_EXPIRATIONS__ENABLED=false
- PREFECT_SERVER__SERVICES__REPOSSESSOR__ENABLED=false
- PREFECT_SERVER__SERVICES__TRIGGERS__ENABLED=false
- PREFECT_SERVER__SERVICES__TASK_RUN_RECORDER__ENABLED=false
- PREFECT_SERVER__SERVICES__EVENT_PERSISTER__ENABLED=false
- PREFECT_SERVER__SERVICES__EVENT_LOGGER__ENABLED=false

Then, on your dedicated "services" pod, set those to true for the services you actually want running.

Example Helm values snippet (adjust to your chart's keys for injecting env into the API deployment):
```yaml
server:
  # API pods: disable services
  api:
    extraEnv:
      - name: PREFECT_SERVER__SERVICES__SCHEDULER__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__FOREMAN__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__CANCELLATION_CLEANUP__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__LATE_RUNS__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__PAUSE_EXPIRATIONS__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__REPOSSESSOR__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__TRIGGERS__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__TASK_RUN_RECORDER__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__EVENT_PERSISTER__ENABLED
        value: "false"
      - name: PREFECT_SERVER__SERVICES__EVENT_LOGGER__ENABLED
        value: "false"

  # Services pod: enable the ones you want
  services:
    extraEnv:
      - name: PREFECT_SERVER__SERVICES__SCHEDULER__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__FOREMAN__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__CANCELLATION_CLEANUP__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__LATE_RUNS__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__PAUSE_EXPIRATIONS__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__REPOSSESSOR__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__TRIGGERS__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__TASK_RUN_RECORDER__ENABLED
        value: "true"
      - name: PREFECT_SERVER__SERVICES__EVENT_PERSISTER__ENABLED
        value: "true"
      # Event logger is optional; keep false unless you specifically need it
      - name: PREFECT_SERVER__SERVICES__EVENT_LOGGER__ENABLED
        value: "false"
```
Docs for reference:
- Settings reference (lists all services and fields): Prefect 3 Settings reference
- Helm/Kubernetes guidance: Running Prefect Server with Helm

If you can share which Helm chart (and version) you're using, I can tailor the exact values keys (e.g., where to put extraEnv for the API vs services deployments) to match your chart's structure.
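For a quick live test before baking the change into Helm values, a hedged sketch using the variable names suggested above; the deployment and namespace names are assumptions:

```bash
#!/usr/bin/env bash
# Temporarily disable the event-consuming services on the API deployment
# without a Helm upgrade. DEPLOYMENT/NAMESPACE are assumptions; for a
# permanent fix, set the same env vars in your Helm values instead.
NAMESPACE="${NAMESPACE:-prefect}"
API_DEPLOYMENT="${API_DEPLOYMENT:-prefect-server}"

kubectl set env deployment/"$API_DEPLOYMENT" -n "$NAMESPACE" \
  PREFECT_SERVER__SERVICES__TASK_RUN_RECORDER__ENABLED=false \
  PREFECT_SERVER__SERVICES__EVENT_PERSISTER__ENABLED=false \
  PREFECT_SERVER__SERVICES__TRIGGERS__ENABLED=false

# Wait for the pods to roll, then re-check what each pod reports
# (e.g. with 'prefect server services ls' inside the pods).
kubectl rollout status deployment/"$API_DEPLOYMENT" -n "$NAMESPACE"
```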
Ryan Sharp
@Brendan Dalpe I have backgroundServices.runAsSeparateDeployment set to true in my helm chart, but inspecting the running services in each of the pods shows that for all API pods and the Background Services pod, all services are enabled except for 'EventLogger' and 'LogDistributor' (checked via 'prefect server services ls'). So I was going to try setting these env variables manually to see if that clears up the lingering task state issues in the UI
@Brendan Dalpe Adding those env variables did seem to clear up the issue; perhaps the 'backgroundServices.runAsSeparateDeployment=true' flag in the chart isn't working as expected (or the --no-services flag)
As of now I only have the Distributor (PREFECT_API_EVENTS_STREAM_OUT_ENABLED) enabled on the API pods, and everything but the EventLogger and LogDistributor enabled on my Background Services pod, and it seems to be working
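For reference, a sketch of the split described above written out as two env files; the variable names are only the ones quoted earlier in this thread (verify the full list against the settings reference for your Prefect version):

```bash
#!/usr/bin/env bash
# Sketch of the working split: API pods keep only the event stream
# Distributor; the background-services pod runs the consumers.
cat > api-pods.env <<'EOF'
# API pods
PREFECT_API_EVENTS_STREAM_OUT_ENABLED=true
PREFECT_SERVER__SERVICES__TASK_RUN_RECORDER__ENABLED=false
PREFECT_SERVER__SERVICES__EVENT_PERSISTER__ENABLED=false
PREFECT_SERVER__SERVICES__TRIGGERS__ENABLED=false
PREFECT_SERVER__SERVICES__SCHEDULER__ENABLED=false
# ...plus the remaining services listed earlier, also set to false
EOF

cat > services-pod.env <<'EOF'
# Background services pod: everything on except EventLogger/LogDistributor
PREFECT_SERVER__SERVICES__TASK_RUN_RECORDER__ENABLED=true
PREFECT_SERVER__SERVICES__EVENT_PERSISTER__ENABLED=true
PREFECT_SERVER__SERVICES__TRIGGERS__ENABLED=true
PREFECT_SERVER__SERVICES__SCHEDULER__ENABLED=true
PREFECT_SERVER__SERVICES__EVENT_LOGGER__ENABLED=false
EOF
```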
Brendan Dalpe
Hmmm... I would concur with you @Ryan Sharp
```
PREFECT_SERVER_LOGGING_LEVEL=DEBUG prefect server start --no-services

 ___ ___ ___ ___ ___ ___ _____
| _ \ _ \ __| __| __/ __|_   _|
|  _/   / _|| _|| _| (__  | |
|_| |_|_\___|_| |___\___| |_|

Configure Prefect to communicate with the server with:

    prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api

View the API reference documentation at http://127.0.0.1:4200/docs

Check out the dashboard at http://127.0.0.1:4200


INFO:     Started server process [91831]
INFO:     Waiting for application startup.
11:57:09.705 | DEBUG   | prefect.server.services.task_run_recorder - TaskRunRecorder started
11:57:09.706 | DEBUG   | prefect.server.events.services.event_persister - Event persister started
11:57:09.706 | DEBUG   | prefect.server.services.proactivetriggers - Starting ProactiveTriggers
11:57:09.706 | DEBUG   | prefect.server.services.proactivetriggers - About to run ProactiveTriggers...
11:57:09.706 | DEBUG   | prefect.server.services.proactivetriggers - Finished running ProactiveTriggers. Next run at 2025-08-25 16:57:14.706510+00:00
11:57:09.707 | INFO    | prefect.server.events.services.triggers - ReactiveTriggers starting with unique consumer name: reactive-triggers-mbp.localdomain-91831-05d39666
11:57:09.707 | DEBUG   | prefect.server.events.triggers - Loading automations
11:57:09.731 | INFO    | prefect.server.events.actions - Starting action message handler
11:57:09.731 | DEBUG   | prefect.server.events.services.actions - Actions started
INFO:     Application startup complete.
11:57:09.732 | INFO    | prefect.server.services.task_run_recorder - Starting periodically process followers task every 5.0 seconds
11:57:09.732 | INFO    | prefect.server.events.triggers - Starting automation change listener
```