# ask-marvin
@Marvin I have set up the background service worker to run; we've run no jobs for the last 15 hours, yet I'm seeing CPU usage on the container grow from 10 to 23%. Can you explain this?
thought for 328.2 seconds
Short answer: an idle Prefect 3.x worker still does a little work (polling for scheduled runs and sending heartbeats), but CPU should stay in the low single digits. Seeing 10–23% with no runs for 15 hours suggests something else is spinning.

**What an idle worker does**
- Polls your work pool for runs on an interval
- Sends heartbeats so the API knows it’s alive
- Serves a lightweight healthcheck endpoint (for liveness/readiness)

Docs: Worker healthchecks

**Common causes of higher/creeping CPU when “idle”**
- Network or auth errors causing rapid retry loops (e.g., an intermittently unreachable API)
- Very aggressive polling/heartbeat intervals
- A scraper hammering the worker’s healthcheck/metrics endpoint
- Log-shipping backoffs spinning (if logs are being sent to the API and failing)
- Environment-level reporting differences (e.g., the container has 0.5 vCPU and “%” is relative to a single core)

**Quick checks**

1) Share details so I can reproduce:
- Prefect version (`prefect version`)
- Exact worker type and how you start it (CLI command or Python code)
- Where the CPU number comes from (docker stats, k8s metrics, etc.) and how many CPUs the container is limited to
- Worker logs around the time CPU spikes; set `PREFECT_LOGGING_LEVEL=DEBUG` to get more signal (see the sketch below)
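A minimal sketch of that last point, assuming the worker runs in plain Docker; the container name, image tag, pool name, and API URL below are placeholders to adapt (add `PREFECT_API_KEY` if you use Prefect Cloud):

```bash
# Sketch only: restart the worker with debug logging so idle-time activity is visible.
# Image tag, pool name, and URL are placeholders; substitute your own values.
docker run -d --name prefect-worker \
  -e PREFECT_API_URL="https://your-prefect-server.example.com/api" \
  -e PREFECT_LOGGING_LEVEL=DEBUG \
  prefecthq/prefect:3-latest \
  prefect worker start --pool my-pool

# Watch what the "idle" worker is actually doing while CPU creeps up
docker logs -f prefect-worker
```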
2) Confirm connectivity/auth
- If using Prefect Cloud or a remote server, ensure the worker can reliably reach `PREFECT_API_URL` (no intermittent 401/5xx/timeout loops); a quick probe is sketched below.
- Check for repeating warnings/errors in the logs; those loops often correlate with CPU.
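One lightweight way to rule out intermittent API trouble is to poll the API health route from inside the worker’s container for a few minutes and see whether anything fails. This is a rough sketch: the `/health` path is what a self-hosted Prefect server exposes under `PREFECT_API_URL`; for Prefect Cloud the exact route and auth may differ, so treat it as an assumption to verify.

```bash
# Hypothetical probe: hit the API health route once a second for 5 minutes
# and print a line whenever a request fails or times out.
# Assumes PREFECT_API_URL (and PREFECT_API_KEY, if needed) are set in the container.
for i in $(seq 1 300); do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
    -H "Authorization: Bearer ${PREFECT_API_KEY:-}" \
    "$PREFECT_API_URL/health")
  [ "$code" = "200" ] || echo "$(date -u +%T) got HTTP $code"
  sleep 1
done
```

Repeated non-200s or timeouts here line up well with the retry loops mentioned above.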
3) Inspect/adjust worker intervals
- If you start the worker from Python, you can tune:
  - `prefetch_seconds` (how far ahead to fetch scheduled work)
  - `heartbeat_interval_seconds` (how often the worker heartbeats)
- If you use the CLI, you can see the available worker settings with:

```bash
prefect config view --show-defaults --show-sources | grep -i worker
```

Then set the relevant settings via environment variables in your container (a sketch follows). Slowing polling/heartbeats slightly can reduce idle churn.
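For a containerized worker that usually means exporting the matching `PREFECT_WORKER_*` settings. The variable names below are my best guess at the relevant ones; confirm them against your own `prefect config view` output before relying on them.

```bash
# Hypothetical example: relax the idle worker's polling/heartbeat cadence.
# Verify these setting names in your `prefect config view` output first.
export PREFECT_WORKER_QUERY_SECONDS=30        # how often the worker polls the work pool
export PREFECT_WORKER_HEARTBEAT_SECONDS=60    # how often it heartbeats to the API
export PREFECT_WORKER_PREFETCH_SECONDS=10     # how far ahead to fetch scheduled runs

prefect worker start --pool my-pool           # "my-pool" is a placeholder
```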
4) Rule out healthcheck scraping
- If you run a probe or monitoring tool, check its frequency against the worker’s healthcheck endpoint. Overly frequent probes can add load.

5) Sanity checks
- Make sure you’re on a current 3.x release (we continuously ship performance fixes). If you need, I can cross-check against the latest notes.
- Verify no other processes are running in that same container (a couple of quick commands are sketched at the end of this message).

If you can share your worker start command (or code), Prefect version, and a short excerpt of DEBUG logs while it’s “idle,” I’ll help pinpoint the cause quickly. If this is reproducible and not explained by configuration or environment, we can open an issue with a minimal repro for the team to investigate.
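On the CPU-reporting and other-processes points, a couple of commands run from the Docker host usually settle it. This assumes a plain Docker setup with a container named `prefect-worker` (a placeholder); the rough Kubernetes equivalents are `kubectl top pod` and `kubectl exec ... -- ps`.

```bash
# Placeholder container name; substitute your own.
WORKER=prefect-worker

# CPU % as Docker reports it, plus the CPU limit it is measured against
docker stats --no-stream "$WORKER"
docker inspect --format '{{.HostConfig.NanoCpus}}' "$WORKER"   # 0 means "no limit"

# Confirm the worker process is the only thing running in the container
docker top "$WORKER"
```

Keep in mind that different tools normalize CPU% differently (per core vs. per limit), so a 10–23% reading against a fractional-CPU limit can correspond to a very small absolute amount of CPU.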