Syméon del Marmol
08/31/2025, 1:58 PM
Path /app/s3-bucket/tasks-scheduling/757985d087234c36913abb280345690f does not exist.
I don't get why it tries to access the task scheduling storage at that moment, and why that object does not exist.

Marvin
08/31/2025, 2:04 PM
That path is the "task scheduling storage" Prefect uses for background runs submitted with .delay(). The submitter writes the call parameters there, then the background worker reads them and usually deletes the record. If the background run fails or the record was already cleaned up, later reads can hit "Path … does not exist." It's noisy, but it points to the scheduling hand-off, not your flow result storage.
What’s happening under the hood
- When you call .delay(), Prefect writes the parameters for that background run to the "task scheduling storage" (via prefect.task_worker.store_parameters). The path typically looks like <base>/tasks-scheduling/<uuid>. A minimal sketch of this hand-off follows this list.
- The background worker process reads those parameters (prefect.task_worker.read_parameters) and then removes them. The storage is intentionally ephemeral.
- On failures, some code paths still try to check this record (e.g., finalization/status polling). If it’s already gone, you’ll see “Path … does not exist.”
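Here's a minimal sketch of that hand-off, assuming Prefect 3.x background tasks and a task worker started with prefect.task_worker.serve; the task name and file layout are hypothetical:
```python
# my_tasks.py -- hypothetical module importable by BOTH processes
from prefect import task

@task(persist_result=True)  # persist so the submitter can fetch the value
def add(x: int, y: int) -> int:
    return x + y

# --- submitter process ---
# add.delay(...) writes the call parameters to <base>/tasks-scheduling/<uuid>
# and returns a future for the background run.
def submit() -> None:
    future = add.delay(1, 2)
    print(future.result())  # waits for a task worker to execute the run

# --- worker process ---
# serve(...) polls for scheduled runs, reads the stored parameters,
# executes the task, and removes the scheduling record.
def run_worker() -> None:
    from prefect.task_worker import serve
    serve(add)
```
The read half of that exchange only works if both sides resolve the same scheduling storage, which is exactly where your error comes from.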
Why it might not exist for you
- The record was already consumed and cleaned up by the worker before a later read attempted to access it.
- Mismatch in scheduling storage between the submitter and the worker. Your path /app/s3-bucket/... looks like a local mount. If the background worker runs in another process/container/host that doesn't see that same path, it will fail to read it and the background run will fail quickly with that error. There's a quick check for this right below.
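A quick way to test that theory: run something like this in both the submitter and the worker environment and compare (the base path is taken from your error message):
```python
import os

# Can this process see the mount the error message points at,
# and does it have read/write access?
base = "/app/s3-bucket/tasks-scheduling"
print("exists:", os.path.isdir(base))
print("readable:", os.access(base, os.R_OK), "| writable:", os.access(base, os.W_OK))
```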
How to make this robust
- Use a shared, truly remote scheduling store that both the submitter and the worker can see. For example, configure a RemoteFileSystem/S3-based block and set it as the default task scheduling storage in both environments (e.g., PREFECT_TASKS__SCHEDULING__DEFAULT_STORAGE_BLOCK pointing to your filesystem block). Avoid local paths like /app/s3-bucket unless both processes share the exact same mount. A sketch of this setup follows this list.
- If you’re using a local/mounted path on purpose, ensure the background worker runs in the same environment with the same mount and permissions.
- For debugging, you can temporarily keep scheduling artifacts by disabling cleanup of failed submissions: set PREFECT_TASKS__SCHEDULING__DELETE_FAILED_SUBMISSIONS=false. This won't prevent normal cleanup after consumption, but it can help keep artifacts around when submission fails early.
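As a sketch of the first option, assuming an fsspec/s3fs-backed RemoteFileSystem block; the bucket, prefix, and block name are placeholders:
```python
from prefect.filesystems import RemoteFileSystem

# Placeholder bucket/prefix and block name. Credentials can be passed via
# settings={...} or come from the environment, since RemoteFileSystem
# delegates to fsspec (s3fs for s3:// paths).
scheduling_fs = RemoteFileSystem(basepath="s3://my-bucket/prefect/task-scheduling")
scheduling_fs.save("task-scheduling", overwrite=True)
```
Then point both the submitter and the worker at that block, e.g. PREFECT_TASKS__SCHEDULING__DEFAULT_STORAGE_BLOCK=remote-file-system/task-scheduling, and make sure the S3 credentials are available in both environments.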
A couple of checks I recommend
- Confirm whether your submitter and the background worker share the same path/base for task scheduling storage.
- Verify any relevant settings/env vars are identical across both processes: PREFECT_TASKS__SCHEDULING__DEFAULT_STORAGE_BLOCK (and any filesystem credentials if using S3 via fsspec). The settings dump after this list can help with that.
- If you can share a small snippet showing how you call .delay() for the flow and any custom storage configuration, I can sanity-check it.
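For the settings comparison, something along these lines printed in both processes should surface any drift; it assumes the nested settings layout that mirrors the PREFECT_TASKS__SCHEDULING__* env var names:
```python
from prefect.settings import get_current_settings

# Print the effective task-scheduling settings so they can be diffed
# between the submitter and the worker environments.
s = get_current_settings()
print("default storage block:", s.tasks.scheduling.default_storage_block)
print("delete failed submissions:", s.tasks.scheduling.delete_failed_submissions)
```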
Optional: staying up to date
- You’re on 3.4.14. While I don’t see a specific fix targeting this in the next patch, it’s generally good to stay current: 3.4.15 release notes.
If you can confirm:
- Are the .delay() submitter and the background run in the same container/host?
- Are you intentionally using a local path (/app/s3-bucket) for scheduling storage, or is that an S3 mount?
- Any custom settings for task scheduling storage or result storage?
With those details, I can give you exact config steps to eliminate the error.