<@ULVA73B9P> What does this error mean: ```prefect...
# ask-marvin
@Marvin What does this error mean:
```
prefect.server.events.ordering.MaxDepthExceeded: occurred=DateTime(2025, 9, 29, 14, 17, 44, 921159, tzinfo=Timezone('UTC')) event='prefect.flow-run.AwaitingConcurrencySlot' resource=Resource(root={'prefect.run-count': '0', 'prefect.state-name': 'AwaitingConcurrencySlot', 'prefect.state-type': 'SCHEDULED', 'prefect.resource.id': 'prefect.flow-run.0199918d-b595-7c1a-a5b0-e110735eb065', 'prefect.resource.name': 'mighty-boobook', 'prefect.state-message': '', 'prefect.state-timestamp': '2025-09-29T14:17:44.921159+00:00'}) related=[RelatedResource(root={'prefect.resource.id': 'prefect.flow.2bdb8e48-6d5f-4f6a-bccf-f3fba9739928', 'prefect.resource.name': 'Neuralight pipeline', 'prefect.resource.role': 'flow'}), RelatedResource(root={'prefect.resource.id': 'prefect.deployment.5439d3af-f805-48e4-908e-31179586f5a0', 'prefect.resource.name': 'neuralight_pipeline_eu-neuralight-paloma-1', 'prefect.resource.role': 'deployment'}), RelatedResource(root={'prefect.resource.id': 'prefect.work-queue.7840b31c-2562-46c5-bacc-b5b43911663d', 'prefect.resource.name': 'default', 'prefect.resource.role': 'work-queue'}), RelatedResource(root={'prefect.resource.id': 'prefect.work-pool.ccdd4c11-b3c6-4f1a-9805-cbd7204ecde5', 'prefect.resource.name': 'default', 'prefect.resource.role': 'work-pool', 'prefect.work-pool.type': 'ecs'}), RelatedResource(root={'prefect.resource.id': 'prefect.tag.auto-scheduled', 'prefect.resource.role': 'tag'})] payload={'intended': {'to': 'SCHEDULED', 'from': 'SCHEDULED'}, 'initial_state': {'name': 'AwaitingConcurrencySlot', 'type': 'SCHEDULED'}, 'validated_state': {'name': 'AwaitingConcurrencySlot', 'type': 'SCHEDULED'}} id=UUID('019995d6-2ad9-7ca4-a31e-25a3e9cb6455') follows=UUID('019995d5-beea-721c-9627-85ee92de8a0c') received=DateTime(2025, 9, 29, 14, 17, 44, 927570, tzinfo=Timezone('UTC'))
```
thought for 358.2 seconds
Short answer: it's not your flow failing. That message is from Prefect's event ordering logic. While your flow run is in AwaitingConcurrencySlot (waiting on a concurrency limit), Prefect emits lots of related events. The server tries to order them causally using the "follows" chain; if that chain gets too long, it caps the search and raises MaxDepthExceeded internally. The event is still processed; you may just see slightly relaxed ordering in the timeline/logs.

What it means
- AwaitingConcurrencySlot: your run is queued because a concurrency limit is in effect (tag-based or work-pool level). It will start once a slot frees up.
- MaxDepthExceeded: the causal chain of related events got deeper than the server's configured maximum. It's a protective cap, generally harmless for your run state.

Typical impact
- Your flow state is unaffected. The run remains scheduled and will start when a slot is available.
- You might see minor out-of-order event rendering in the UI or logs. Automations usually continue to work; this is a guardrail, not a hard failure.

How to resolve the underlying waiting (if runs are stuck)

1) Inspect tag-based concurrency limits
```
prefect concurrency-limit ls
prefect concurrency-limit inspect <tag>
prefect concurrency-limit reset <tag>       # clears active slots
prefect concurrency-limit delete <tag>      # removes the limit
```
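For reference, a tag-based limit only affects runs that actually carry the matching tag. Here is a minimal sketch of how a tag gets attached in code; the `database` tag name and the limit of 2 are hypothetical examples, not taken from your setup:

```python
from prefect import flow, task

# Task runs carrying this tag count against the "database" concurrency limit,
# e.g. one created with: prefect concurrency-limit create database 2
@task(tags=["database"])
def query_rows():
    ...

@flow
def pipeline():
    query_rows()
```

If `prefect concurrency-limit inspect database` shows every slot occupied, that limit is the one holding things up; resetting or raising it frees the queued work.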
Docs: CLI: concurrency-limit, Guide: tag-based concurrency limits

2) Check work pool concurrency (your pool shows type=ecs)
```
prefect work-pool inspect <pool-name>
prefect work-pool clear-concurrency-limit <pool-name>
prefect work-pool set-concurrency-limit <pool-name> <n>
```
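If you'd rather check from Python than the CLI, here is a minimal sketch using the Prefect client. It assumes the async client's `read_work_pool` method as exposed in recent Prefect releases; the pool name `default` comes from the event's related resources:

```python
import asyncio
from prefect.client.orchestration import get_client

async def check_pool(pool_name: str = "default") -> None:
    # Read the work pool and report its concurrency limit (None means unlimited).
    # Assumes client.read_work_pool(work_pool_name=...) as in recent Prefect versions.
    async with get_client() as client:
        pool = await client.read_work_pool(work_pool_name=pool_name)
        print(f"{pool.name} ({pool.type}): concurrency_limit={pool.concurrency_limit}")

if __name__ == "__main__":
    asyncio.run(check_pool())
```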
If you reduce time spent in AwaitingConcurrencySlot (by raising/clearing limits or reducing parallel runs), you'll generally see fewer deep event chains and avoid the MaxDepthExceeded notices. More on the event ordering system: API: prefect.server.events.ordering

If you'd like, share:
- the output of `prefect concurrency-limit ls`
- the output of `prefect work-pool inspect <your-pool>`
and I can help pinpoint which limit is holding your runs.