Melek Alan
08/28/2025, 12:42 PM
With replicaCount=2, the same flow run is picked up by multiple workers. This never happened before the upgrade. Is this a change in worker behavior, or a bug? What’s the recommended way to prevent multiple workers from claiming the same run?
Marvin
08/28/2025, 12:42 PM
Marvin
08/28/2025, 12:47 PM
Short-term workarounds
- Set replicaCount back to 1 for the affected work queue while we investigate.
- Alternatively, run multiple workers but point each replica at a distinct work queue (one queue per replica); see the values sketch after this list. Running multiple workers on the same queue is supported, but isolating queues prevents any duplicate-claim fallout while we triage.
- If you pinned a specific Prefect image previously, consider temporarily pinning the worker/API images back to the prior known-good tag that shipped with chart 2024.12.11173517 until we confirm a fix.
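If it helps, here is a minimal sketch of the one-queue-per-replica idea expressed as two separate worker releases. The key layout (worker.replicaCount, worker.config.workPool, worker.config.workQueues) follows the general shape of the prefect-worker chart, but verify it against the values.yaml for your chart version; the pool and queue names here are made up.
```yaml
# values-worker-a.yaml -- first worker release
# Sketch only: confirm key names against your prefect-worker chart version.
worker:
  replicaCount: 1              # one replica per release while we triage
  config:
    workPool: my-k8s-pool      # illustrative pool name
    workQueues:
      - queue-a                # queue dedicated to this worker

# values-worker-b.yaml -- second worker release, isolated on its own queue
# worker:
#   replicaCount: 1
#   config:
#     workPool: my-k8s-pool
#     workQueues:
#       - queue-b
```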
What we need to confirm
- Are these Kubernetes workers (Kubernetes work pool), and do both replicas point to the same pool/queue?
- Do you see two different worker names claiming the same flow_run_id in the logs?
- Prefect versions for API and worker images currently running (image tags), plus your Helm chart values for the worker section (replicaCount, args/env, pool/queue names).
- Are you self-hosting Prefect Server via the Helm chart (not Prefect Cloud)?
How to capture useful diagnostics
- Enable DEBUG logs on the workers and API to see claim attempts:
- Set env on worker and API pods: PREFECT_LOGGING_LEVEL=DEBUG (a values sketch follows this list)
- Look for duplicate successful calls to the claim endpoint (both workers logging that they “claimed” the same flow_run_id) and share one example flow_run_id.
- If you can, confirm PostgreSQL version (the chart may have bumped dependencies).
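For the DEBUG step, a sketch of what the env var change could look like. The goal is simply PREFECT_LOGGING_LEVEL=DEBUG on both the worker and API pods; the extraEnvVars key is an assumption based on how the prefect-worker and prefect-server charts generally expose extra environment variables, so check your chart version’s values.yaml for the exact key.
```yaml
# Sketch only: these two sections would normally live in the values files
# of their respective releases (prefect-worker and prefect-server).

# prefect-worker release values
worker:
  extraEnvVars:
    - name: PREFECT_LOGGING_LEVEL
      value: "DEBUG"

# prefect-server release values (self-hosted API)
server:
  extraEnvVars:
    - name: PREFECT_LOGGING_LEVEL
      value: "DEBUG"
```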
Why this matters
- Prefect’s API is designed to atomically assign a run to a single worker; duplicate claims across replicas should not occur. There is no setting you’re expected to tweak to “lock” a run—this is enforced server-side.
Next steps
- If you can share the info above, I can help narrow whether this is configuration-specific or a regression in the chart’s bundled images.
- If the same run is reproducibly claimed by multiple replicas on the same queue, please open a bug with details so we can prioritize a fix: Open a Prefect issue
- Helm chart repo (for values references and versions): prefect-helm
- Worker concepts (for expected behavior): Workers in Prefect 3
I couldn’t find any documented behavior change that would explain duplicate claims after that chart bump, so I’m treating this as a bug until proven otherwise. If you can share the image tags and a snippet of the DEBUG logs showing two claims for the same run, I’ll help push this to the right place quickly.