# ask-marvin
r
@Marvin I have my prefect workers set to autoscale as needed, however I've noticed that when multiple workers are tied to the same work pool, they occasionally each submit a separate flow run for the same deployment, leading to multiple runs when only 1 is scheduled. How can I prevent this? Should I just stick to a single worker per work pool?
m
thought for 619.6 seconds
Short answer: you do not need to run a single worker per work pool. Prefect is designed for many workers (and autoscaled replicas) polling the same pool. Multiple workers should not create duplicate runs; they “claim” existing scheduled runs atomically, so only one worker should win. What you’re seeing is almost certainly duplicate flow runs being created by the scheduler (not the workers). This can happen after redeploys, schedule edits, or certain version transitions, and a few bugs have been fixed over time.

Relevant threads:
- Duplicate runs after redeploys of frequently scheduled deployments: #17538
- “Dangling” schedules causing duplicates at the same timestamp: #17703
- Idempotency key differences across versions leading to duplicates: #15477

Recommended actions

1) Upgrade Prefect to the latest 3.x
- There have been several scheduler/idempotency fixes since 3.2.x. Latest release notes: Releases

2) Inspect whether the duplicates have the same `expected_start_time` and “Created by” in the UI
- If both runs show “Created by Scheduler”, it’s scheduler duplication (not workers). If one came from an automation or code, search your codebase/automations for `run_deployment` or API calls that may be triggering a second run. A sketch for auditing duplicates programmatically follows below.
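For example, here's a minimal sketch of that audit using the Prefect 3.x Python client. The deployment ID is a placeholder, and the filter classes shown are the standard client filter schemas; verify the names against your installed version:

```python
# Sketch: group a deployment's flow runs by expected_start_time to spot
# scheduler duplicates. Assumes Prefect 3.x; <DEPLOYMENT_ID> is a placeholder.
import asyncio
from collections import Counter

from prefect import get_client
from prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterDeploymentId


async def audit_duplicates(deployment_id: str) -> None:
    async with get_client() as client:
        runs = await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                deployment_id=FlowRunFilterDeploymentId(any_=[deployment_id])
            )
        )
    # Two or more runs sharing an expected_start_time suggests scheduler duplication
    counts = Counter(run.expected_start_time for run in runs)
    for start_time, n in counts.items():
        if n > 1:
            print(f"{n} runs scheduled for {start_time}")


asyncio.run(audit_duplicates("<DEPLOYMENT_ID>"))
```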
3) Reset schedules for affected deployments
- Clear and recreate a single schedule to remove any stale/dangling schedules:
```bash
prefect deployment schedule ls --id <DEPLOYMENT_ID>
prefect deployment schedule clear --id <DEPLOYMENT_ID>
# then recreate the one schedule you want
prefect deployment schedule create --id <DEPLOYMENT_ID> --cron "<CRON>" --timezone "<TZ>"
```
You can also toggle the schedule off/on in the UI if you prefer.

4) Add a safety guard with a tag-based concurrency limit of 1
- Add a unique tag (e.g., `only-one`) to the deployment, then set a concurrency limit for that tag so that even if two runs are created, only one will execute at a time:
```bash
prefect concurrency-limit create only-one --limit 1
```
Tag your deployment/flow with `only-one`. This is the most reliable general guardrail across multiple workers; a sketch of tagging at deploy time follows after step 5.

5) If you have a dedicated queue or pool per deployment, you can also set a concurrency limit there (coarser control):
```bash
# Work queue concurrency (if using a dedicated work queue)
prefect work-queue set-concurrency-limit <QUEUE_NAME> 1

# Or pool-wide (affects all deployments on the pool)
prefect work-pool set-concurrency-limit <POOL_NAME> 1
```
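As mentioned in step 4, here's a minimal sketch of applying the tag at deploy time via the Python SDK. The flow, deployment, work pool, and image names are hypothetical; `tags` is a parameter of `Flow.deploy` in Prefect 3.x:

```python
# Sketch: tag a deployment so the "only-one" concurrency limit applies to its runs.
# Flow, deployment, pool, and image names are hypothetical placeholders.
from prefect import flow


@flow
def nightly_etl():
    ...


if __name__ == "__main__":
    nightly_etl.deploy(
        name="nightly-etl",
        work_pool_name="my-pool",
        image="my-registry/nightly-etl:latest",  # needed for Docker-based pools; adjust for yours
        tags=["only-one"],  # runs inherit this tag, so the limit of 1 applies
    )
```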
Why not a single worker per pool?
- It’s fine to run a single worker, but it isn’t required, and it limits scalability and resilience. The worker claim process is atomic: only one worker should transition a scheduled run to Running. If you see multiple runs, they were likely created upstream by the scheduler or an external trigger rather than by multiple workers.

If you can share your Prefect version, server or Cloud, work pool type (e.g., Kubernetes), and an example set of duplicate flow run IDs with the same `expected_start_time`, we can dig in further and confirm whether you’re hitting one of the known cases above or something new.
r
@Marvin yes it does appear to be dangling schedules rather than an issue with the multiple workers. I updated the deployments 3 days ago, and the duplicate runs only occurred for 2 days then stopped today, which may be due to the previous schedules persisting. If I am redeploying deployments, should I also be clearing the existing schedules or removing the deployment altogether first before deploying the new versions?
m
thought for 89.5 seconds
Great observation: “it duplicated for two days then stopped” lines up with Prefect’s scheduler keeping a small number of future runs pre-generated. After your redeploy, the old schedule’s pre-generated runs executed alongside the new one’s until they were exhausted. You should not need to delete deployments between redeploys; the goal is to make schedules idempotent so updates replace or modify the existing schedule(s) instead of creating new, dangling ones.

Recommended approach for redeploys
- Use stable schedule slugs so redeploys update in place. In Prefect 3.x, schedules support a `slug` you control; keep the slug constant for the “same” schedule across redeploys so the server treats it as an update instead of a new schedule. Example in the SDK:
```python
from prefect.schedules import Cron

Cron("0 2 * * *", timezone="UTC", slug="daily-2am-utc")
```
- If you change the schedule shape or want a clean slate, clear or delete schedules before recreating. List schedules:
```bash
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
```
- Clear all schedules for the deployment (then recreate the one(s) you want):
```bash
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"
```
- Or delete specific schedule(s) by ID:
```bash
prefect deployment schedule delete --schedule-id <UUID>
```
- You generally don’t need to delete the deployment itself; clearing or deleting schedules is the surgical fix. A redeploy sketch with a stable slug follows below.
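As noted above, here's a minimal sketch of a redeploy that attaches the schedule with a stable slug. The flow, pool, and image names are hypothetical; `schedules` on `Flow.deploy` accepts `prefect.schedules` objects in Prefect 3.x:

```python
# Sketch: redeploy with a stable schedule slug so the server updates the
# existing schedule in place. Names are hypothetical placeholders.
from prefect import flow
from prefect.schedules import Cron


@flow
def daily_report():
    ...


if __name__ == "__main__":
    daily_report.deploy(
        name="daily-report",
        work_pool_name="my-pool",
        image="my-registry/daily-report:latest",  # adjust for your pool type
        # keep slug constant across redeploys so this stays one schedule
        schedules=[Cron("0 2 * * *", timezone="UTC", slug="daily-2am-utc")],
    )
```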
Extra guardrail
- Add a tag-based concurrency limit of 1 on the deployment so that even if a duplicate run slips through, only one executes:
```bash
prefect concurrency-limit create only-one --limit 1
```
Then tag your deployment/flow with `only-one`.

Related context for the scheduler duplicates:
- Redeploy/frequent-schedule changes creating duplicates: #17538
- “Dangling” schedules causing duplicates: #17703

If you’d like, share one affected deployment’s name and I can suggest the exact CLI calls to list and clean up its schedules, plus a sample redeploy snippet with a stable slug to keep things idempotent going forward.
r
@Marvin I am deploying and redeploying via a YAML file; should I include the slug somewhere in this template?
```yaml
schedule:
  cron: "0 1 * * *"
  timezone: "America/New_York"
```
m
thought for 39.9 seconds
Yes: include a stable slug on the schedule so redeploys update it in place instead of creating a new, dangling schedule. If your deployment YAML uses a single schedule block, add `slug` there:
```yaml
schedule:
  cron: "0 1 * * *"
  timezone: "America/New_York"
  slug: "daily-1am-et"
  active: true
```
If your template uses multiple schedules, put it under each entry:
```yaml
schedules:
  - cron: "0 1 * * *"
    timezone: "America/New_York"
    slug: "daily-1am-et"
    active: true
```
Guidance
- Keep the slug value stable across redeploys when it’s the “same” schedule; Prefect will treat it as an update instead of a new schedule.
- If you intentionally change the slug or the schedule shape, consider clearing old schedules first to avoid overlaps:
```bash
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"
# then redeploy (which will recreate the schedule with your slug)
```
- After deploying, verify you have exactly one schedule with the expected slug (a Python sketch of the same check follows below):
```bash
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
```
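If you'd rather check from Python, here's a minimal sketch using the client. It assumes a Prefect 3.x version with schedule slugs; the deployment name and slug are placeholders, and `read_deployment_by_name` takes the "flow-name/deployment-name" form:

```python
# Sketch: confirm a deployment has exactly one schedule with the expected slug.
# Deployment name and slug are hypothetical placeholders.
import asyncio

from prefect import get_client


async def verify_schedule(deployment_name: str, expected_slug: str) -> None:
    async with get_client() as client:
        deployment = await client.read_deployment_by_name(deployment_name)
        schedules = await client.read_deployment_schedules(deployment.id)
    slugs = [s.slug for s in schedules]
    assert slugs == [expected_slug], f"unexpected schedules: {slugs}"
    print(f"OK: exactly one schedule with slug {expected_slug!r}")


asyncio.run(verify_schedule("<FLOW>/<DEPLOYMENT>", "daily-1am-et"))
```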
If you’d like, paste a small snippet of your full YAML (just the deployment section) and I’ll confirm the exact placement based on your template.
r
@Marvin Are the slugs unique to each deployment? Could I just set a consistent slug, something like 'schedule1', for every deployment? Or does it need to be a unique slug for each schedule type?
m
thought for 36.3 seconds
Slugs are scoped to a single deployment.
- You can reuse the same slug value (e.g., `schedule1`) across different deployments without conflict.
- Within a single deployment, each schedule must have a unique slug. If you have multiple schedules on one deployment, give each a different slug (e.g., `nightly`, `hourly-healthcheck`).

If each deployment has only one schedule, using the same slug like `schedule1` in all your deployment YAMLs is perfectly fine and will give you the update-in-place behavior you want on redeploys.

Tip: keep the slug stable for the “same” schedule across redeploys. If you change the schedule meaningfully (cron, timezone, etc.) but keep the slug, Prefect will update the existing schedule instead of creating a new one. If you’re making a major switch or want a clean slate, you can still clear schedules first:
```bash
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"
```
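To make the scoping concrete, here's a minimal sketch of two deployments that safely reuse the same slug. All names are hypothetical placeholders:

```python
# Sketch: the same slug on different deployments never conflicts, because
# slugs are scoped per deployment. All names are hypothetical placeholders.
from prefect import flow
from prefect.schedules import Cron


@flow
def ingest():
    ...


@flow
def report():
    ...


if __name__ == "__main__":
    # each deployment has a single schedule, and both use slug="schedule1"
    ingest.deploy(
        name="ingest",
        work_pool_name="my-pool",
        image="my-registry/ingest:latest",
        schedules=[Cron("0 1 * * *", timezone="America/New_York", slug="schedule1")],
    )
    report.deploy(
        name="report",
        work_pool_name="my-pool",
        image="my-registry/report:latest",
        schedules=[Cron("0 2 * * *", timezone="America/New_York", slug="schedule1")],
    )
```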