Bart Shappee (10/11/2025, 3:37 PM)

Marvin (10/11/2025, 3:42 PM)
1) Gather the basics
- Your exact Prefect version (output of prefect version)
- Exact worker type and how you start it (CLI command or Python code)
- Where the CPU number comes from (docker stats, k8s metrics, etc.) and how many CPUs the container is limited to
- Worker logs around the time CPU spikes; set PREFECT_LOGGING_LEVEL=DEBUG to get more signal (see the snippet just below)
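If you launch the worker from a Python entrypoint, a minimal way to bump logging and confirm the running version looks like this (a sketch; in a container you'd normally set the env var in the image or deployment spec instead):
```python
import os

# Must be set before Prefect configures logging; in a container you would
# normally bake this into the image or deployment env instead.
os.environ["PREFECT_LOGGING_LEVEL"] = "DEBUG"

import prefect

print(prefect.__version__)  # confirm the exact version running in the container
```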
2) Confirm connectivity/auth
- If using Prefect Cloud or a remote server, ensure the worker can reliably reach PREFECT_API_URL (no intermittent 401/5xx/timeout loops); a quick connectivity check is sketched below.
- Check for repeating warnings/errors in the logs; those retry loops often correlate with CPU spikes.
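One way to verify connectivity from inside the container is a one-off script like this (a minimal sketch; it picks up the same PREFECT_API_URL/PREFECT_API_KEY the worker sees):
```python
import asyncio

from prefect import get_client


async def check_api() -> None:
    # Uses the PREFECT_API_URL / PREFECT_API_KEY from the environment,
    # i.e. the same configuration the worker runs with.
    async with get_client() as client:
        # api_healthcheck() returns None on success, or the exception it hit
        exc = await client.api_healthcheck()
        print("API reachable" if exc is None else f"API problem: {exc!r}")


asyncio.run(check_api())
```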
3) Inspect/adjust worker intervals
- If you start the worker from Python, you can tune:
  - prefetch_seconds (how far ahead to fetch scheduled work)
  - heartbeat_interval_seconds (how often the worker heartbeats)
- If you use the CLI, you can see available worker settings with:
prefect config view --show-defaults --show-sources | grep -i worker
Then set the relevant settings via environment variables in your container; slowing polling/heartbeats slightly can reduce idle churn. A Python example follows this list.
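If you do start the worker from Python, the tuning looks roughly like this (a sketch, assuming a process work pool named "my-pool"; the interval values are illustrative, not recommendations, and exact constructor kwargs can vary by version):
```python
import asyncio

from prefect.workers.process import ProcessWorker


async def main() -> None:
    worker = ProcessWorker(
        work_pool_name="my-pool",       # assumption: your pool's name
        prefetch_seconds=30,            # fetch scheduled work up to 30s ahead
        heartbeat_interval_seconds=60,  # heartbeat once a minute
    )
    await worker.start()  # polls for work until stopped


if __name__ == "__main__":
    asyncio.run(main())
```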
4) Rule out healthcheck scraping
- If you run a probe or monitoring tool, check its frequency against the worker’s healthcheck endpoint. Overly frequent probes can add load.
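If your worker exposes the healthcheck webserver (e.g. started with prefect worker start --with-healthcheck), you can probe it manually and compare against your monitor's interval. The port below is an assumption (the default; override via PREFECT_WORKER_WEBSERVER_PORT):
```python
import httpx

# Assumption: worker started with `--with-healthcheck`, serving on the
# default webserver port inside the same container/network namespace.
resp = httpx.get("http://localhost:8080/health", timeout=5)
print(resp.status_code, resp.text)
```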
5) Sanity checks
- Make sure you’re on a current 3.x release (we continuously ship performance fixes). If needed, I can cross-check against the latest release notes.
- Verify no other processes are running in that same container.
If you can share your worker start command (or code), Prefect version, and a short excerpt of DEBUG logs while it’s “idle,” I’ll help pinpoint the cause quickly. If this is reproducible and not explained by configuration or environment, we can open an issue with a minimal repro for the team to investigate.