Ryan Sharp
10/13/2025, 12:20 PM
Marvin
10/13/2025, 12:21 PM
Marvin
10/13/2025, 12:31 PM
run_deployment or API calls that may be triggering a second run.
3) Reset schedules for affected deployments
- Clear and recreate a single schedule to remove any stale/dangling schedules:
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"
# then recreate the one schedule you want
prefect deployment schedule create "<FLOW>/<DEPLOYMENT>" --cron "<CRON>" --timezone "<TZ>"
You can also toggle the schedule off/on in the UI if you prefer.
4) Add a safety guard with a tag-based concurrency limit of 1
- Add a unique tag (e.g., only-one) to the deployment. Then set a concurrency limit for that tag so even if two runs are created, only one will execute at a time:
prefect concurrency-limit create only-one 1
Tag your deployment/flow with only-one (a deploy sketch follows this list). This is the most reliable general guardrail across multiple workers.
5) If you have a dedicated queue or pool per deployment, you can also set a concurrency limit there (coarser control):
# Work queue concurrency (if using a dedicated work queue)
prefect work-queue set-concurrency-limit <QUEUE_NAME> 1
# Or pool-wide (affects all deployments on the pool)
prefect work-pool set-concurrency-limit <POOL_NAME> 1
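For the tag in step 4, here is a minimal sketch of attaching it at deploy time, assuming the Prefect 3.x flow.from_source(...).deploy(...) workflow; the repo URL, entrypoint, flow, and work pool names are placeholders:
from prefect import flow

if __name__ == "__main__":
    # Load the flow from source control and create/update its deployment.
    # The "only-one" tag is what the concurrency limit in step 4 matches on.
    flow.from_source(
        source="https://github.com/your-org/your-repo",  # placeholder repo
        entrypoint="flows/etl.py:nightly_etl",           # placeholder entrypoint
    ).deploy(
        name="nightly-etl",
        work_pool_name="my-pool",                        # placeholder work pool
        tags=["only-one"],
    )
flow.serve() accepts the same tags argument if you are not using a work pool.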
Why not single worker per pool?
- It’s fine to run a single worker, but not required and it limits scalability/resilience. The worker claim process is atomic: only one worker should transition a scheduled run to running. If you see multiple runs, they were likely created upstream by the scheduler or an external trigger rather than due to multiple workers.
If you can share:
- Your Prefect version, server or Cloud, work pool type (e.g., Kubernetes), and an example set of duplicate flow run IDs with the same expected_start_time
…we can dig in further and confirm whether you’re hitting one of the known cases above or something new.
Ryan Sharp
10/13/2025, 12:44 PM
Marvin
10/13/2025, 12:44 PM
Marvin
10/13/2025, 12:45 PM
Give each schedule a stable slug you control. Keep the slug constant for the “same” schedule across redeploys so the server treats it as an update instead of a new schedule.
- Example in SDK (a fuller redeploy sketch follows this list):
from prefect.schedules import Cron
Cron("0 2 * * *", timezone="UTC", slug="daily-2am-utc")
- If you change schedule shape or want a clean slate, clear or delete schedules before recreating
- List schedules:
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
- Clear all schedules for the deployment (then recreate the one(s) you want):
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"
- Or delete a specific schedule by ID:
prefect deployment schedule delete "<FLOW>/<DEPLOYMENT>" <SCHEDULE_ID>
- You generally don’t need to delete the deployment itself — clearing/deleting schedules is the surgical fix.
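Putting this together, a minimal redeploy sketch with a stable slug (assuming Prefect 3.x, where deploy() accepts a schedules list; the repo URL, entrypoint, and work pool name are placeholders):
from prefect import flow
from prefect.schedules import Cron

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/your-org/your-repo",  # placeholder repo
        entrypoint="flows/etl.py:nightly_etl",           # placeholder entrypoint
    ).deploy(
        name="nightly-etl",
        work_pool_name="my-pool",                        # placeholder work pool
        # Keep the slug constant across redeploys so the server updates this
        # schedule in place instead of creating a second one.
        schedules=[Cron("0 2 * * *", timezone="UTC", slug="daily-2am-utc")],
    )
Running this script again with the same slug should update the existing schedule rather than add a duplicate.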
Extra guardrail
- Add a tag-based concurrency limit of 1 on the deployment so even if a duplicate run slips through, only one executes:
prefect concurrency-limit create only-one 1
Then tag your deployment/flow with only-one.
Related context for the scheduler duplicates
- Redeploy/frequent-schedule changes creating duplicates: #17538
- “Dangling” schedules causing duplicates: #17703
If you’d like, share one affected deployment’s name and I can suggest exact CLI calls to list and clean up its schedules, plus a sample redeploy snippet with a stable slug to keep things idempotent going forward.
Ryan Sharp
10/13/2025, 2:12 PM
Marvin
10/13/2025, 2:12 PM
Marvin
10/13/2025, 2:13 PM
If your deployment template defines a single schedule, add the slug there:
schedule:
cron: "0 1 * * *"
timezone: "America/New_York"
slug: "daily-1am-et"
active: true
If your template uses multiple schedules, put it under each entry:
schedules:
- cron: "0 1 * * *"
timezone: "America/New_York"
slug: "daily-1am-et"
active: true
Guidance
- Keep the slug value stable across redeploys when it’s the “same” schedule. Prefect will treat it as an update instead of a new schedule.
- If you intentionally change the slug or schedule shape, consider clearing old schedules first to avoid overlaps:
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"
# then redeploy (which will recreate the schedule with your slug)
- After deploying, verify you have exactly one schedule with the expected slug:
prefect deployment schedule ls "<FLOW>/<DEPLOYMENT>"
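If you’d rather verify programmatically, here is a small sketch using the Python client; it assumes your Prefect version exposes read_deployment_schedules and a slug field on the schedule objects, and the deployment name is a placeholder:
import asyncio
from prefect.client.orchestration import get_client

async def list_schedules(deployment_name: str) -> None:
    async with get_client() as client:
        deployment = await client.read_deployment_by_name(deployment_name)
        for sched in await client.read_deployment_schedules(deployment.id):
            # After a redeploy you should see exactly one row with your stable slug.
            print(sched.id, sched.slug, sched.active, sched.schedule)

if __name__ == "__main__":
    asyncio.run(list_schedules("my-flow/nightly-1am-et"))  # placeholder "<FLOW>/<DEPLOYMENT>"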
If you’d like, paste a small snippet of your full YAML (just the deployment section) and I’ll confirm the exact placement based on your template.
Ryan Sharp
10/13/2025, 2:39 PM
Marvin
10/13/2025, 2:39 PM
Marvin
10/13/2025, 2:40 PM
- Schedule slugs only need to be unique within a single deployment, so you can reuse the same slug (e.g., schedule1) across different deployments without conflict.
- Within a single deployment, each schedule must have a unique slug. If you have multiple schedules on one deployment, give each a different slug (e.g., nightly, hourly-healthcheck).
If each deployment only has one schedule, using the same slug like schedule1 in all your deployment YAMLs is perfectly fine and will give you the “update-in-place” behavior you want on redeploys.
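As a concrete sketch of both cases (the repo, entrypoints, flow names, and work pool are placeholders; the slugs mirror the examples above):
from prefect import flow
from prefect.schedules import Cron

if __name__ == "__main__":
    src = "https://github.com/your-org/your-repo"  # placeholder repo

    # Two different deployments reusing slug "schedule1" -- no conflict.
    flow.from_source(source=src, entrypoint="flows/etl.py:etl").deploy(
        name="etl-nightly",
        work_pool_name="my-pool",
        schedules=[Cron("0 1 * * *", timezone="America/New_York", slug="schedule1")],
    )
    flow.from_source(source=src, entrypoint="flows/reports.py:reports").deploy(
        name="reports-nightly",
        work_pool_name="my-pool",
        schedules=[Cron("0 3 * * *", timezone="America/New_York", slug="schedule1")],
    )

    # One deployment with two schedules -- each slug must be unique within it.
    flow.from_source(source=src, entrypoint="flows/checks.py:healthcheck").deploy(
        name="healthcheck",
        work_pool_name="my-pool",
        schedules=[
            Cron("0 2 * * *", timezone="America/New_York", slug="nightly"),
            Cron("0 * * * *", timezone="America/New_York", slug="hourly-healthcheck"),
        ],
    )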
Tip: Keep the slug stable for the “same” schedule across redeploys. If you change the schedule meaningfully (cron/timezone/etc.) but keep the slug, Prefect will update that existing schedule instead of creating a new one. If you’re making a major switch or want a clean slate, you can still clear schedules first:
prefect deployment schedule clear "<FLOW>/<DEPLOYMENT>"