# ask-marvin
@Marvin After upgrading our Prefect Helm chart from 2024.12.11173517 to 2025.8.21160848, we noticed that when `replicaCount=2`, the same flow run is picked up by multiple workers. This never happened before the upgrade. Is this a change in worker behavior, or a bug? What's the recommended way to prevent multiple workers from claiming the same run?
thought for 263.9 seconds
Short answer: this is not an intended behavior change. Multiple workers should not be able to claim the same flow run in Prefect 3.x. Given that it started right after the Helm upgrade, this looks like a regression or a configuration issue rather than expected behavior. To get you unblocked and to diagnose:

**Recommended immediate mitigations**
- If possible, set the worker `replicaCount` back to 1 for the affected work queue while we investigate.
- Alternatively, keep multiple workers but point each replica at a distinct work queue (one queue per replica); see the `helm` sketch after this list. Multiple workers on the same queue is supported, but isolating queues prevents any duplicate-claim fallout while we triage.
- If you previously pinned a specific Prefect image, consider temporarily pinning the worker/API images back to the prior known-good tag that shipped with chart 2024.12.11173517 until we confirm a fix.
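A minimal sketch of the first two mitigations, assuming the `prefect-worker` chart's `worker.replicaCount`, `worker.config.workPool`, and `worker.config.workQueues` values; the release names, namespace, and pool/queue names here are placeholders, so check the values schema shipped with your chart version before applying:

```bash
# Mitigation 1: scale the existing worker release back to a single replica.
helm upgrade prefect-worker prefect/prefect-worker \
  --namespace prefect \
  --reuse-values \
  --set worker.replicaCount=1

# Mitigation 2: one single-replica worker release per work queue, so no two
# workers ever poll the same queue. Repeat with a distinct release and queue.
helm upgrade --install prefect-worker-q1 prefect/prefect-worker \
  --namespace prefect \
  --set worker.replicaCount=1 \
  --set worker.config.workPool=my-k8s-pool \
  --set 'worker.config.workQueues={queue-1}'
```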
**What we need to confirm**
- Are these Kubernetes workers (a Kubernetes work pool), and do both replicas point at the same pool/queue?
- Do you see two different worker names claiming the same `flow_run_id` in the logs?
- Prefect versions for the API and worker images currently running (image tags), plus your Helm chart values for the worker section (`replicaCount`, args/env, pool/queue names); a quick way to gather these is sketched below.
- Are you self-hosting Prefect Server via the Helm chart (not Prefect Cloud)?
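A quick way to pull most of that straight from the cluster (the release name and namespace are placeholders; redact anything sensitive before sharing):

```bash
# Effective Helm values for the worker release.
helm get values prefect-worker -n prefect

# Image tags actually running in each pod (worker and API).
kubectl get pods -n prefect \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```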
**How to capture useful diagnostics**
- Enable DEBUG logs on the workers and API to see claim attempts: set `PREFECT_LOGGING_LEVEL=DEBUG` in the environment of the worker and API pods.
- Look for duplicate successful calls to the claim endpoint (both workers logging that they "claimed" the same `flow_run_id`) and share one example `flow_run_id`; see the sketch after this list.
- If you can, confirm your PostgreSQL version (the chart may have bumped dependencies).
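A sketch of flipping on DEBUG logging and hunting for a duplicate claim with `kubectl`; the deployment names, label selector, and run ID are placeholders, and the exact wording of claim log lines varies by Prefect version, so grep for the run ID itself:

```bash
# Turn on DEBUG logging for the worker and API deployments.
kubectl set env deployment/prefect-worker -n prefect PREFECT_LOGGING_LEVEL=DEBUG
kubectl set env deployment/prefect-server -n prefect PREFECT_LOGGING_LEVEL=DEBUG

# After the next duplicate pickup, search every worker pod's logs for the run ID.
# Two different pods logging a claim for the same ID is the evidence we need.
RUN_ID="<flow_run_id>"
for pod in $(kubectl get pods -n prefect -l app.kubernetes.io/name=prefect-worker -o name); do
  echo "== ${pod} =="
  kubectl logs -n prefect "${pod}" | grep "${RUN_ID}"
done
```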
**Why this matters**
Prefect's API is designed to atomically assign a run to a single worker; duplicate claims across replicas should not occur. There is no setting you're expected to tweak to "lock" a run, since this is enforced server-side.

**Next steps**
- If you can share the info above, I can help narrow down whether this is configuration-specific or a regression in the chart's bundled images.
- If the same run is reproducibly duplicate-claimed across replicas on the same queue, please open a bug with details so we can prioritize a fix: Open a Prefect issue
- Helm chart repo (for values references and versions): prefect-helm
- Worker concepts (for expected behavior): Workers in Prefect 3

I couldn't find any documented behavior change that would explain duplicate claims after that chart bump, so I'm treating this as a bug until proven otherwise. If you can share the image tags and a snippet of the DEBUG logs showing two claims for the same run, I'll help push this to the right place quickly.