# ask-marvin
k
@Marvin after renaming a deployment, the worker is crashing with "404 not found" for url api.prefect.cloud/.../deployments/old_deployment_uuid how to fix?
m
thought for 393.7 seconds
k
?
@Marvin review above
m
thought for 18.6 seconds
I'm happy to help! Could you share what you'd like me to review? Please paste the relevant content here. Helpful details to include:
- Prefect version (3.x vs 2.x) and environment (Prefect Cloud vs self-hosted)
- What you're running (flow code, deployment config, worker/work pool setup)
- Any CLI commands you used and their output
- Error messages or logs (worker/flow run)
- What you expected vs what happened

Tip: wrap code/logs in triple backticks like this:
```
# paste code or logs here
```
k
@Marvin after renaming a deployment, the worker is crashing with "404 not found" for url api.prefect.cloud/.../deployments/old_deployment_uuid how to fix?
m
thought for 318.3 seconds
k
all we did was rename the flow function, which is used as the deployment name, in `build_deployment`
m
Short answer: your worker is picking up flow runs that still reference the old deployment ID. Workers don't "cache" deployments; each flow run includes the `deployment_id` it was created with. If that deployment was deleted/replaced during the rename, those runs will 404.

How to fix

1) Cancel stale runs that point to the old deployment
- In the UI: filter Flow Runs by the old deployment name/ID and state SCHEDULED/PENDING, then cancel them.
- Or via CLI:
```
prefect deployment inspect "<FLOW_NAME>/<OLD_DEPLOYMENT_NAME>"    # see the old deployment ID
prefect flow-run ls --state SCHEDULED --flow-name "<FLOW_NAME>"   # find scheduled runs
prefect flow-run cancel <FLOW_RUN_ID>                             # cancel each stale run
```
2) Remove or disable the old deployment and its schedules
- UI: open the old deployment, turn off/delete its schedules, or delete the deployment.
- CLI:
```
prefect deployment ls
prefect deployment delete "<FLOW_NAME>/<OLD_DEPLOYMENT_NAME>"
```
3) Make sure the new deployment is active and scheduled
- Verify the new deployment exists and has schedules enabled.
- Trigger a test run to confirm:
```
prefect deployment run "<FLOW_NAME>/<NEW_DEPLOYMENT_NAME>" --watch
```
4) Optional: restart your worker so logs clearly reflect the current state
```
prefect worker start --pool "<YOUR_WORK_POOL>"
```
Notes and prevention
- If you edited the deployment's name in the UI, the UUID should remain the same; a 404 usually means the old deployment was deleted. If you "renamed" by deploying a new name from code, that creates a new deployment with a new UUID; any previously scheduled runs still point to the old one and will fail until canceled.
- Update any automations that "Run a deployment" to point at the new deployment (they reference deployments by ID).

If you can share how you performed the rename (UI edit vs. re-deploy with a new name) and whether you deleted the old deployment, I can tailor exact steps or provide a quick script to bulk-cancel the stale runs.

Helpful refs:
- Deployments API
- Caching overview (context on object identity vs. names)
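The bulk-cancel idea above can be sketched in plain Python. This is a simulation over in-memory run records, not a call to the real Prefect client: `find_stale_runs`, the sample UUIDs, and the record fields are all invented for illustration, though each record mirrors the fact that a flow run stores the `deployment_id` it was created with.

```python
# Simulated flow-run records; in a real workspace these would come from the
# Prefect API. The IDs and field names here are hypothetical stand-ins.
OLD_DEPLOYMENT_ID = "11111111-aaaa-bbbb-cccc-222222222222"
NEW_DEPLOYMENT_ID = "33333333-dddd-eeee-ffff-444444444444"

runs = [
    {"id": "run-1", "deployment_id": OLD_DEPLOYMENT_ID, "state": "SCHEDULED"},
    {"id": "run-2", "deployment_id": NEW_DEPLOYMENT_ID, "state": "SCHEDULED"},
    {"id": "run-3", "deployment_id": OLD_DEPLOYMENT_ID, "state": "COMPLETED"},
]

def find_stale_runs(runs, old_deployment_id):
    """Return not-yet-started runs that still point at the old deployment."""
    pending_states = {"SCHEDULED", "PENDING", "LATE"}
    return [
        r for r in runs
        if r["deployment_id"] == old_deployment_id and r["state"] in pending_states
    ]

stale = find_stale_runs(runs, OLD_DEPLOYMENT_ID)
for run in stale:
    run["state"] = "CANCELLED"  # in practice: prefect flow-run cancel <run_id>

print([r["id"] for r in stale])  # only run-1: run-2 targets the new deployment,
                                 # run-3 already completed
```

The same filter-then-cancel loop would work against real run data fetched from the API; only pending-ish states need cancelling, since completed runs never get picked up again.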
Short answer: your worker is pulling a flow run that still points to a deployment that no longer exists. Renaming via `prefect deploy` with a new name creates a new deployment (new UUID). Any old scheduled/queued runs or automations targeting the old deployment will 404 when the worker tries to start them.

How to fix

1) Confirm what changed
- If you created a new deployment with a new name: the old deployment's runs/schedules/automations still reference the old deployment ID.
- If you truly just renamed in the UI: the ID should be unchanged; a 404 usually means the old deployment was deleted and there are leftover runs.

2) Clean up old runs
- In the UI: filter Flow Runs by the old deployment and cancel/delete any SCHEDULED/PENDING/LATE runs.
- Or via CLI:
```
prefect deployment ls
prefect deployment inspect "your-flow-name/your-new-deployment-name"
prefect flow-run ls --state SCHEDULED --state PENDING --limit 200
# identify runs for the old deployment and cancel/delete them
prefect flow-run cancel <run_id>
prefect flow-run delete <run_id>
```
CLI help for these commands:
- For `prefect deployment` commands, run `prefect deployment --help`
- For `prefect flow-run` commands, run `prefect flow-run --help`
3) Update schedules and automations to point at the new deployment
- If you use a schedule on the old deployment, disable it and add a schedule to the new one:
```
prefect deployment schedule --help
```
- If you use Automations that "Create flow run" for a specific deployment, edit them to target the new deployment. In the Cloud UI: Automations > Edit > change the "Create flow run" action to the new deployment.
- REST references: Deployment API, Automations API

4) Restart the worker
- After cleaning up orphaned runs and updating references, restart your worker so it stops encountering the stale work item:
```
prefect worker start --pool <your-pool> --name <your-worker-name>
```
Why this happens
- Flow runs store `deployment_id`. If the old deployment is deleted (common when "renaming" by re-deploying with a new name), any previously scheduled/queued runs and automations continue to reference the old `deployment_id`, causing 404s.

If this keeps crashing the worker instead of just failing a run, consider upgrading the worker environment to the latest Prefect 3.x and share a snippet of the worker logs; we'd like to make sure you're not hitting a bug.

Quick follow-ups to help me tailor the fix:
- Did you rename in the UI, or did you run `prefect deploy` with a new `--name` (creating a new deployment)?
- Did you later delete the old deployment?
- Can you share the worker version (`prefect version`) and whether the 404 appears tied to specific run IDs in the logs?
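The distinction the answers keep drawing (a rename in place keeps the UUID; re-deploying under a new name mints a new one) can be illustrated with a toy in-memory registry. This is a sketch of the identity semantics only, not Prefect's implementation; the `deploy` and `rename` functions and all names are invented.

```python
import uuid

# Toy registry mapping deployment ID -> record; models identity semantics only.
deployments = {}

def deploy(name):
    """Deploying under a name always mints a brand-new UUID here."""
    dep_id = str(uuid.uuid4())
    deployments[dep_id] = {"name": name}
    return dep_id

def rename(dep_id, new_name):
    """Renaming in place mutates the record; the UUID is unchanged."""
    deployments[dep_id]["name"] = new_name
    return dep_id

old_id = deploy("build_deployment")
same_id = rename(old_id, "build_deployment_v2")  # in-place rename keeps the ID
new_id = deploy("build_deployment_v2")           # re-deploying mints a new ID

print(same_id == old_id)  # True: scheduled runs keep resolving
print(new_id == old_id)   # False: old runs now point at a different object
```

Any flow run holding `old_id` resolves fine after the rename but dangles after a delete-and-redeploy, which is exactly the 404 the worker reports.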
Short answer: a rename shouldn't change a deployment's ID in Prefect 3.x. A 404 for /deployments/<old_uuid> usually means the worker is trying to start a flow run that still points to a deployment that was deleted or recreated (new ID), not just renamed.

A few checks and fixes:
- Did you truly "rename" the deployment in place (UI or API), or did you delete and re-deploy with a new name? If the latter, existing scheduled runs from the old deployment will still reference the old (now-missing) deployment_id and can trigger 404s when the worker fetches deployment details.
- In the Prefect Cloud UI, pause/disable the old deployment (if it still exists), and cancel/delete any Scheduled runs created by it. If it's already deleted, go to Flow Runs, filter by your flow and state=Scheduled, then bulk-cancel/delete those runs so the worker doesn't try to pick them up.
- Restart the worker after cleanup so it doesn't keep retrying the offending run(s).
- Recreate the schedule on the new deployment, and ensure any Automations or external triggers that "Run Deployment" now point to the new deployment.

If you prefer the CLI for cleanup, you can list and delete scheduled runs by flow:
- List scheduled runs:
```
prefect flow-run ls --flow-name "<your-flow-name>" --state SCHEDULED --limit 200
```
- Delete specific runs:
```
prefect flow-run delete <flow_run_id>
```
Why this happens:
- Scheduled flow runs reference a deployment_id. If that deployment is deleted (or replaced by "renaming" via re-deploy), those runs still point to the old UUID. When the worker picks one up, it asks the API for the deployment and gets a 404. A true rename should keep the same deployment_id, so a 404 strongly suggests the original deployment was removed/replaced.

Docs:
- REST API for reading a deployment: Read Deployment API
- Workers overview: Workers in Prefect 3

If you only renamed and did not delete the deployment, a 404 on the old UUID would be unexpected; please share:
- How you renamed (UI vs API vs code/re-deploy)
- Prefect version (server/worker)
- Worker log snippet around the 404
- Whether there are Scheduled runs created before the change

We can dig in further with that info. Also, if you're not on the latest Prefect 3.x, please upgrade: there have been fixes around worker robustness, and the worker should not crash outright on a missing deployment.
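The failure mode described above, where the worker dereferences a run's deployment_id that no longer resolves, can be mimicked with a dictionary lookup. `fetch_deployment`, `NotFound`, and the IDs are hypothetical stand-ins for the GET /deployments/<id> call, not Prefect code.

```python
# Toy stand-in for GET /deployments/<id>: a missing key plays the role of a 404.
deployments = {"new-uuid": {"name": "build_deployment_v2"}}

class NotFound(Exception):
    """Stands in for the API's 404 response."""

def fetch_deployment(dep_id):
    if dep_id not in deployments:
        raise NotFound(f"404: /deployments/{dep_id}")
    return deployments[dep_id]

# A leftover scheduled run still references the deleted deployment's UUID.
stale_run = {"id": "run-1", "deployment_id": "old-uuid"}

try:
    fetch_deployment(stale_run["deployment_id"])
except NotFound as exc:
    print(exc)  # this is the lookup the worker keeps failing

# After cancelling stale runs, remaining runs resolve to existing deployments.
healthy_run = {"id": "run-2", "deployment_id": "new-uuid"}
print(fetch_deployment(healthy_run["deployment_id"])["name"])
```

Cancelling the stale runs removes every record whose lookup would fail, which is why the worker stops erroring once the cleanup is done.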
Short answer: your worker is picking up a flow run that still points to a deployment that no longer exists. A simple rename normally does not change a deployment's ID in Prefect 3.x, so a 404 for the old UUID usually means the old deployment was deleted or a new deployment was created under a different ID, while scheduled runs or automations are still referencing the old one.

What to do now

1) Confirm what exists
- In the UI: open the deployment you intend to run and copy its ID. Also search for any old deployment with the previous name.
- CLI:
```
prefect deployment ls
```
2) Clean up orphaned runs that reference the old deployment
- In the UI: go to the flow's Runs list, filter to Scheduled/Pending, open a run and check the "Deployment" it references. Cancel any runs that reference the old deployment.
- CLI (cancel by ID):
```
prefect flow-run ls
prefect flow-run cancel <flow-run-id>
```
3) Update any automations or triggers
- If you have Automations with a "Run Deployment" action, re-open them and reselect the current deployment. They store the deployment by ID, not just by name.

4) Ensure the schedule exists on the new/current deployment
- If you re-deployed with a different deployment name, the new deployment has a new ID and its own schedule. Make sure that schedule is enabled and the worker is polling the right work pool.

5) Restart the worker (and consider upgrading)
- Restart the worker process to clear any stuck job that was crashing on the 404.
- If you're not on the latest Prefect, upgrade:
```
pip install -U prefect
```
Why this happens
- Workers fetch flow runs from your work pool. Each flow run references a deployment ID. If that deployment was deleted or replaced (e.g., you re-deployed under a new name and removed the old one), any pre-created scheduled runs still reference the now-missing deployment ID, causing GET /deployments/<old_uuid> to return 404 and the worker to error.

Tips to prevent this next time
- Prefer updating a deployment in place (same deployment name) so its ID stays the same when you redeploy: use `prefect deploy` with the same deployment name to update, not create a new one.
- Before deleting a deployment, cancel its pending/scheduled runs and update any automations that target it.

References
- REST API for reading deployments: Read deployment
- Schedules: Schedules
- Automations: Automations

If you can share:
- How you "renamed" the deployment (edited the name in the UI vs. re-ran `prefect deploy` with a new name)
- Prefect version, worker type, and a snippet of the worker logs around the crash
…I can pinpoint whether this is just orphaned runs or something else.
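The point about automations storing the deployment by ID can also be shown with a toy model. Nothing here is Prefect's API: the registry, `create_deployment`, and the automation dict are invented to show why a delete-and-recreate "rename" silently breaks an automation even when a deployment with the new name exists.

```python
import uuid

# Toy model: an Automation's "Run Deployment" action stores the deployment's
# ID, not its name, so deleting and recreating the deployment orphans it.
deployments = {}  # id -> name

def create_deployment(name):
    dep_id = str(uuid.uuid4())
    deployments[dep_id] = name
    return dep_id

old_id = create_deployment("build_deployment")
automation = {"action": "run-deployment", "deployment_id": old_id}

# "Rename" by deleting and re-creating under a new name: a new UUID appears.
del deployments[old_id]
new_id = create_deployment("build_deployment_v2")

broken = automation["deployment_id"] not in deployments
print(broken)  # True: the automation still targets the deleted UUID

automation["deployment_id"] = new_id  # the fix: re-point the automation
print(automation["deployment_id"] in deployments)  # True
```

This is why step 3 above says to re-open each automation and reselect the current deployment after any delete-and-recreate.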
Thanks for the details. A 404 on /deployments/<old_deployment_uuid> means the worker picked up flow runs that still reference a deployment object that no longer exists. Renaming alone should not change a deployment's UUID in Prefect 3.x, so this typically happens if:
- The "rename" actually created a new deployment (new UUID) and the old one was deleted, or
- There were already-scheduled/pending flow runs pointing at the old deployment when it was removed/changed.

How to fix quickly

1) Cancel or delete any flow runs that reference the old deployment ID
- In the UI: filter flow runs by that deployment (or check the run details) and cancel any Scheduled/Pending/Late runs that show the old deployment UUID in their details.
- Once those are gone, your worker will stop trying to fetch the old deployment.

2) Stop new runs from being created for the old deployment
- If the old deployment still exists and has a schedule, pause or clear it. From the CLI:
```
prefect deployment schedule clear "<flow-name>/<deployment-name>"
```
Then re-create the schedule on the renamed deployment if needed:
```
prefect deployment schedule create "<flow-name>/<new-deployment-name>" --cron "..."
```
You can list deployments to verify names/IDs:
```
prefect deployment ls --flow-name "<flow-name>"
```
3) Start new runs from the renamed deployment
- Use the UI "Run" button on the renamed deployment, or:
```
prefect deployment run "<flow-name>/<new-deployment-name>"
```
4) Optional: restart the worker
- Just to clear any transient backoff after the 404s.

How to avoid this next time
- If you need to change the deployment name, pause the schedule first, let any existing scheduled runs finish (or cancel them), then rename, then resume the schedule.
- Double-check the deployment's ID after renaming: in the UI, copy the deployment ID and make sure it matches what your worker logs reference. If the IDs differ, you effectively created/referenced a different deployment.

If you're still seeing 404s after clearing/cancelling the old runs, please share:
- Prefect version (3.x?) and whether you're on Prefect Cloud
- The exact deployment ID shown on the renamed deployment's page
- A snippet of the worker logs showing the 404 with the old deployment UUID

Happy to help walk through your workspace specifics to get this fully cleaned up.
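The safe-rename sequence above (pause the schedule, cancel pending runs, rename in place, resume) can be sketched over toy in-memory state. `safe_rename` and all records here are invented for illustration; in a real workspace each step maps to a UI action or CLI command rather than a dict mutation.

```python
import uuid

# Toy state: one deployment record plus its runs. Invented for illustration.
deployment = {"id": str(uuid.uuid4()), "name": "build_deployment",
              "schedule_active": True}
runs = [{"id": "run-1", "state": "SCHEDULED"},
        {"id": "run-2", "state": "COMPLETED"}]

def safe_rename(deployment, runs, new_name):
    deployment["schedule_active"] = False          # 1) pause the schedule
    for run in runs:                               # 2) cancel pending runs
        if run["state"] in {"SCHEDULED", "PENDING", "LATE"}:
            run["state"] = "CANCELLED"
    old_id = deployment["id"]
    deployment["name"] = new_name                  # 3) rename in place
    assert deployment["id"] == old_id              # the UUID must not change
    deployment["schedule_active"] = True           # 4) resume the schedule
    return deployment

safe_rename(deployment, runs, "build_deployment_v2")
print(deployment["name"], deployment["schedule_active"])
print([r["state"] for r in runs])  # run-1 cancelled, run-2 untouched
```

Because no run survives the rename while still pointing at a soon-to-be-missing deployment, the worker never sees a dangling deployment_id.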