José Agretti
09/25/2025, 12:32 PM
12:15:56.924 | ERROR | MainThread | prefect._internal.concurrency - exception calling callback for <Future at 0x7f58ca4fc140 state=cancelled>
File "/app/.venv/lib/python3.12/site-packages/prefect/_internal/concurrency/calls.py", line 215, in _invoke_callbacks
callback(self)
File "/app/.venv/lib/python3.12/site-packages/prefect/concurrency/_leases.py", line 64, in handle_lease_renewal_failure
exc = future.exception()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 483, in exception
raise CancelledError()
Do you know what this is?

Marvin
09/25/2025, 12:32 PM
Marvin
09/25/2025, 12:35 PM
This is a known, harmless bit of shutdown noise: the concurrency lease renewal callback handle_lease_renewal_failure calls future.exception()
, but for a cancelled future that raises CancelledError
, which Prefect then logs as:
prefect._internal.concurrency - exception calling callback for <Future ... state=cancelled>
This does not affect your flow outcome; it’s just noisy. It’s been reported by others (often on Python 3.12/3.13) and we’re tracking it:
- Future.CancelledError when flow finished
- Exception in callback handle_lease_renewal_failure
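For context, this is standard-library behavior rather than anything specific to your flows; a minimal sketch that reproduces it:
# Minimal reproduction of the stdlib behavior in the traceback above:
# calling .exception() on a cancelled Future raises CancelledError
# rather than returning an exception object.
from concurrent.futures import CancelledError, Future

future: Future = Future()
future.cancel()  # a still-pending Future can be cancelled directly

try:
    future.exception()
except CancelledError:
    print("cancelled futures raise CancelledError from .exception()")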
How to reduce/suppress the noise:
- Upgrade to the latest Prefect 3.x; we’re improving shutdown behavior around concurrency leases.
- If it still shows up and you want to silence it, you can raise the level for that specific logger:
Option A: in code (early in your process)
import logging
logging.getLogger("prefect._internal.concurrency").setLevel(logging.CRITICAL)
Option B: via a logging config file (set PREFECT_LOGGING_SETTINGS_PATH to the path of this YAML)
version: 1
disable_existing_loggers: false
formatters:
  simple:
    format: '%(asctime)s | %(levelname)s | %(name)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    stream: ext://sys.stdout
    formatter: simple
loggers:
  prefect._internal.concurrency:
    level: CRITICAL
    handlers: [console]
    propagate: false
root:
  level: INFO
  handlers: [console]
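If you go the YAML route, you can sanity-check the file with the standard library before pointing PREFECT_LOGGING_SETTINGS_PATH at it. A quick sketch (the logging.yml path is just an example; PyYAML is assumed to be available):
# Load and apply the YAML above to confirm it is a valid dictConfig.
# "logging.yml" is an example path; PyYAML is assumed to be installed.
import logging
import logging.config

import yaml

with open("logging.yml") as f:
    logging.config.dictConfig(yaml.safe_load(f))  # raises if the config is malformed

# ERROR is below CRITICAL, so this record is dropped...
logging.getLogger("prefect._internal.concurrency").error("suppressed")
# ...while other loggers still emit at INFO and above.
logging.getLogger("my_flow").info("still visible")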
A couple quick questions to confirm:
- Which Prefect version and Python version are you running?
- Are you using global/tag-based concurrency limits or task concurrency settings in these flows?
That will help me confirm whether you’re hitting the known concurrency-lease shutdown path or whether something else is at play.

José Agretti
09/25/2025, 12:38 PM
I also get "ERROR | prefect.FlowRunCancellingObserver - Error consuming events"
at the end of running a flow, followed by
File "/app/.venv/lib/python3.12/site-packages/prefect/runner/_observers.py", line 56, in __aexit__
await self._consumer_task
File "/app/.venv/lib/python3.12/site-packages/prefect/runner/_observers.py", line 27, in _consume_events
async for event in self._events_subscriber:
File "/app/.venv/lib/python3.12/site-packages/prefect/events/clients.py", line 642, in __anext__
message = orjson.loads(await self._websocket.recv())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/websockets/asyncio/connection.py", line 322, in recv
raise self.protocol.close_exc from self.recv_exc
websockets.exceptions.ConnectionClosedError: no close frame received or sent
Marvin
09/25/2025, 12:38 PM
Marvin
09/25/2025, 12:39 PM
This is another benign shutdown error: the events WebSocket that the FlowRunCancellingObserver listens on is closed without a close frame while the consumer in _observers.py
and events/clients.py
is still awaiting a message.
This does not affect your flow result and typically occurs right at shutdown.
Why it happens:
- Normal end-of-process teardown closes the socket while the consumer is still awaiting a message.
- Network layers/load balancers/proxies (and managed runtimes like Vertex AI) sometimes drop connections without a close frame when the container exits or on idle timeouts.
- It’s intermittent; you’ll notice it more in short-lived containers or environments that aggressively terminate connections.
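For illustration, here is roughly what that failure mode looks like from the websockets side, and how a consumer loop can treat a missing close frame at teardown as benign. This is a standalone sketch, not Prefect’s implementation; the URL is a placeholder:
# Standalone sketch: a recv() loop that treats "no close frame" at shutdown
# as benign instead of surfacing it as an ERROR. The URL is a placeholder.
import asyncio

import websockets
from websockets.exceptions import ConnectionClosedError, ConnectionClosedOK

async def consume(url: str) -> None:
    async with websockets.connect(url) as ws:
        while True:
            try:
                message = await ws.recv()
            except ConnectionClosedOK:
                break  # close frame exchanged: clean shutdown
            except ConnectionClosedError:
                # peer vanished without a close frame (proxy idle timeout,
                # container teardown) -- safe to ignore at end of run
                break
            print(message)

# asyncio.run(consume("wss://example.invalid/events"))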
Mitigations:
- Upgrade to the latest Prefect 3.x; we’ve been improving shutdown handling around observers/events.
- If it only ever shows at the end and results are fine, it’s safe to ignore. To silence the noise:
- In code:
import logging
logging.getLogger("prefect.FlowRunCancellingObserver").setLevel(logging.CRITICAL)
- Or via a logging config (set PREFECT_LOGGING_SETTINGS_PATH to its path):
version: 1
disable_existing_loggers: false
handlers:
  console:
    class: logging.StreamHandler
    stream: ext://sys.stdout
loggers:
  prefect.FlowRunCancellingObserver:
    level: CRITICAL
    handlers: [console]
    propagate: false
root:
  level: INFO
  handlers: [console]
- If you see it mid-run (not just at the very end), check for:
- Network/proxy timeouts or middleboxes closing WebSockets prematurely (raise idle/connection timeouts if you can).
- Container runtime sending early SIGTERM/SIGKILL on job completion hooks.
- Worker or flow run environment losing outbound connectivity.
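If you end up wanting to hide both of the noisy loggers from this thread and keep everything else untouched, a small consolidated sketch (run it early in the process, before any flows start):
# Silence only the two benign shutdown errors discussed here; every other
# logger keeps its configured level.
import logging

for name in (
    "prefect._internal.concurrency",
    "prefect.FlowRunCancellingObserver",
):
    logging.getLogger(name).setLevel(logging.CRITICAL)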
References:
- Observer implementation: prefect/runner/_observers.py
- Events WebSocket client: prefect/events/clients.py
- WebSockets exception docs: websockets.exceptions.ConnectionClosedError
- Prefect logging customization: Logging customization
Quick checks I’d love to confirm:
- Prefect and Python versions you’re on.
- Whether you’re on Prefect Cloud or a self-hosted Server.
- Any proxies/load balancers in front of your workers (Vertex AI networking, sidecars, etc.).
If you share those, I can suggest tighter remediation if needed.

José Agretti
09/25/2025, 3:37 PM
--no-services flag

Marvin
09/25/2025, 3:37 PM
Marvin
09/25/2025, 3:41 PM