Syméon del Marmol
09/17/2025, 6:55 PM
Encountered exception during execution: ValueError('Path /task-storage/<id> does not exist.')
How can I prevent that and simply forward the original error?
Marvin
09/17/2025, 7:00 PM
`await future.result_async()` will re-raise the original exception instead of a storage error.
Why this happens
- Background tasks (`.delay()`) write their input payload to the "task scheduling storage", and write results/exceptions to "result storage".
- By default, these use local disk paths (e.g., `/task-storage/...`). If your flow runner and task workers aren't on the same host or don't share that path, reads fail with "Path /task-storage/<id> does not exist".
- When the result/exception can't be retrieved, that storage error masks the original error (see the sketch below).
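For context, here's a minimal sketch of the failure mode being described (the task and the URL are made up for illustration):
```python
from prefect import flow, task

@task
def fetch_data(url: str) -> str:
    # Pretend this is the real failure we care about
    raise ValueError(f"could not fetch {url}")

@flow
async def parent_flow():
    # .delay() hands the task to a background task worker
    future = fetch_data.delay("https://example.com")
    # Without shared storage, this line fails with
    # "Path /task-storage/<id> does not exist." instead of the ValueError above.
    return await future.result_async()
```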
How to fix
1) Configure a shared storage for task scheduling
- Set `PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK` to a storage block that all machines can access (an NFS mount via LocalFileSystem, or S3/GCS/Azure).
- Create the block, then set the env var to the block reference shown by `prefect block ls`, in the form `<block-type>/<block-name>`.
Example: shared LocalFileSystem (e.g., an NFS mount)
```python
from prefect.filesystems import LocalFileSystem

LocalFileSystem(basepath="/mnt/shared/prefect/task-storage").save("shared-task-scheduling", overwrite=True)
LocalFileSystem(basepath="/mnt/shared/prefect/results").save("shared-results", overwrite=True)
```
Set on every flow/worker environment:
```bash
export PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK="local-file-system/shared-task-scheduling"
export PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK="local-file-system/shared-results"
export PREFECT_RESULTS_PERSIST_BY_DEFAULT="true"
```
Example: generic remote filesystem (fsspec) such as S3
```python
from prefect.filesystems import RemoteFileSystem

RemoteFileSystem(basepath="s3://my-bucket/prefect/task-storage").save("s3-task-scheduling", overwrite=True)
RemoteFileSystem(basepath="s3://my-bucket/prefect/results").save("s3-results", overwrite=True)
```
Set on every flow/worker environment:
```bash
export PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK="remote-file-system/s3-task-scheduling"
export PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK="remote-file-system/s3-results"
export PREFECT_RESULTS_PERSIST_BY_DEFAULT="true"
```
Note: If you prefer native integration blocks (S3/GCS/Azure), create those blocks (via UI or code) and set the env var to their exact slug/name as shown by `prefect block ls`.
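For instance, a native S3 block might be created like this (a sketch assuming `prefect-aws` is installed and AWS credentials are available in every container; the bucket and folder names are placeholders):
```python
from prefect_aws import S3Bucket

# Native S3 block; "s3-bucket" is the slug `prefect block ls` shows for this type
S3Bucket(
    bucket_name="my-bucket",
    bucket_folder="prefect/results",
).save("s3-results", overwrite=True)

# Then, on every flow/worker environment:
#   export PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK="s3-bucket/s3-results"
```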
2) Persist results so the original exception is retrievable
- Global: `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true`
- Per-task/flow: set `persist_result=True` in your `@task(...)` or `@flow(...)` decorators, and optionally provide `result_storage` to override the default.
Once these are set, `await my_future.result_async()` will re-raise the original exception from the remote result storage instead of failing on a missing local path.
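As a rough end-to-end sketch of what that looks like (the task and its error are invented; assumes the `shared-results` block from above exists):
```python
from prefect import flow, task

@task(persist_result=True, result_storage="local-file-system/shared-results")
def risky(n: int) -> int:
    if n < 0:
        raise ValueError(f"negative input: {n}")  # the error we want surfaced
    return n * 2

@flow
async def parent():
    future = risky.delay(-1)
    try:
        await future.result_async()
    except ValueError as exc:
        # With shared, persisted result storage this is the task's own
        # exception, not a "Path ... does not exist" storage error.
        print("task failed with:", exc)
```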
Docs for more detail
- Background tasks
- Results and result storage
- Settings reference (env vars)
Quick sanity checks
- Ensure both the flow runner and all task workers have the same environment variables set (a quick comparison script follows this list).
- If using LocalFileSystem, verify the mount/path exists in every container/host.
- If you still see path errors only on failures, double-check that `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true` (or task-level `persist_result=True`) is in place so exceptions are persisted as well.
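For the first check, a tiny script you can run in each container and diff (plain stdlib, nothing Prefect-specific; the variable names are the ones discussed above):
```python
import os

# Print the storage-related settings so you can compare them across containers
for name in (
    "PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK",
    "PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK",
    "PREFECT_RESULTS_PERSIST_BY_DEFAULT",
):
    print(name, "=", os.environ.get(name))
```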
Would you share:
- Whether your flow runner and task workers run on different machines/containers?
- Which storage backend you prefer (shared volume, S3, GCS, Azure)?
I can provide a tailored snippet for your setup.

Syméon del Marmol
09/17/2025, 7:05 PM

Marvin
09/17/2025, 7:06 PM
1) Mount a shared volume at `/mnt/prefect` in every container that runs flows or task workers.
- Ensure UIDs/permissions allow read/write from all containers (a quick probe is sketched below).
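A quick way to check the mount and permissions (a hypothetical probe, not a Prefect API):
```python
from pathlib import Path

# Run this in every container that executes flows or task workers;
# all of them should see the same file at the same path.
probe = Path("/mnt/prefect") / "write-probe.txt"
probe.write_text("ok")
print("read back:", probe.read_text())
probe.unlink()
```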
2) Create LocalFileSystem blocks for scheduling and results that use that shared path
Run this once (e.g., in a small init container or locally with your Prefect profile):
```python
from prefect.filesystems import LocalFileSystem

LocalFileSystem(basepath="/mnt/prefect/task-storage").save(
    "shared-task-scheduling", overwrite=True
)
LocalFileSystem(basepath="/mnt/prefect/results").save(
    "shared-results", overwrite=True
)
```
3) Point Prefect settings to those blocks and persist results
Ensure these env vars are set in every flow runner and worker container:
```bash
# Make background task scheduling payloads go to the shared mount
PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK=local-file-system/shared-task-scheduling
# Make results/exceptions go to the shared mount and persist them
PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK=local-file-system/shared-results
PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
```
Optionally, you can set `persist_result=True` on specific tasks/flows too.
4) Example docker-compose sketch
```yaml
services:
  flow-runner:
    image: your-image
    volumes:
      - ./shared:/mnt/prefect
    environment:
      - PREFECT_API_URL=...
      - PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK=local-file-system/shared-task-scheduling
      - PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK=local-file-system/shared-results
      - PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
  task-worker:
    image: your-image
    deploy:
      replicas: 1  # scale this up when you want to fan out
    volumes:
      - ./shared:/mnt/prefect
    environment:
      - PREFECT_API_URL=...
      - PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK=local-file-system/shared-task-scheduling
      - PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK=local-file-system/shared-results
      - PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
```
5) Verify your config quickly
From inside a container, you can sanity-check what Prefect resolved:
```python
from prefect.results import get_default_result_storage, get_or_create_default_task_scheduling_storage

print("Result storage:", get_default_result_storage())
print("Task scheduling storage:", get_or_create_default_task_scheduling_storage())
```
Both should show a LocalFileSystem pointing at your `/mnt/prefect/...` paths.
After this, when a background task fails, `await future.result_async()` should re-raise the original exception (stored in shared result storage) instead of failing with "Path /task-storage/<id> does not exist".
Docs:
- Background tasks
- Results and storage
- Settings reference
If you share your docker-compose snippet (services + volumes + env) and the exact mount path you use, I can double-check for any mismatches.

Syméon del Marmol
09/17/2025, 7:09 PM
```
- PREFECT_LOCAL_STORAGE_PATH=/task-storage
- PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
```
Marvin
09/17/2025, 7:10 PM
- Mount a shared volume at `/task-storage` in every container.
- Create two LocalFileSystem blocks that point at directories on that mount.
- Point Prefect to those blocks via env vars.
- Ensure results persist so exceptions can be retrieved and re-raised.
One-time setup (run once anywhere with your Prefect profile):
```python
from prefect.filesystems import LocalFileSystem

LocalFileSystem(basepath="/task-storage/task-scheduling").save(
    "shared-task-scheduling", overwrite=True
)
LocalFileSystem(basepath="/task-storage/results").save(
    "shared-results", overwrite=True
)
```
Set these env vars on every container (flow runner + all task workers):
```bash
PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK=local-file-system/shared-task-scheduling
PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK=local-file-system/shared-results
PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
```
Notes
- `PREFECT_LOCAL_STORAGE_PATH` is not the right knob in Prefect 3 for this. If you want to force a local path fallback for results, the 3.x setting is `PREFECT_RESULTS_LOCAL_STORAGE_PATH`, but for background tasks you still must set `PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK` (or explicitly pass a storage to `.delay()` via task options).
- With the above, `await future.result_async()` will re-raise the original exception stored in `/task-storage/results/...` instead of failing with "Path /task-storage/<id> does not exist."
If you prefer, you can also set storage at the decorator level:
```python
from prefect import task, flow

@task(persist_result=True, result_storage="local-file-system/shared-results")
def my_task(x): ...

@flow(result_storage="local-file-system/shared-results")
def my_flow(): ...
```
Docs for reference:
- Background tasks
- Results and storage
- Settings reference
If you paste your docker-compose service snippets (volumes + env) I can double-check path consistency across containers.

Syméon del Marmol
09/18/2025, 1:19 PM
```
raise ValueError(f"Path {path} does not exist.")\n', 'ValueError: Path /task-storage/results/7ce38f97f64a4c4dad0d1fa84f468240 does not exist.\n']
```
Marvin
09/18/2025, 1:26 PM
Your path points to a local folder: `/task-storage/results/<id>`. If your flow run and task workers are in different containers/hosts, that local path must be a shared volume or network drive mounted at the exact same path on all processes. If it isn't, the worker writes the file but the flow process can't see it — hence the "does not exist" error.
How to fix it
Pick one of these patterns and apply it consistently for both the flow runner and all task workers:
1) Use a shared local filesystem (e.g., a Docker/K8s volume, NFS/SMB)
- Ensure the folder exists and is shared/mounted at the same path in all containers/hosts, e.g. mount a volume to `/task-storage`.
- Make the result storage block point to that path and ensure both flow and workers use the same block.
Two reliable ways to ensure tasks use your block:
- Set it explicitly on the task (works great with your `.with_options(...)` pattern):
```python
from prefect import task
from prefect.filesystems import LocalFileSystem

shared_fs = LocalFileSystem(basepath="/task-storage")  # shared volume mount

@task  # or your existing task
def do_work(x): ...

# when you fan out
fut = do_work.with_options(
    name=f"do_work-{i}",
    result_storage=shared_fs,
).delay(i)
```
- Or set a default result storage block for your runtime via env var on both the flow runner and all workers:
```bash
PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK=local-file-system/<your-block-name>
```
where the LocalFileSystem block's basepath is your shared mount (e.g. `/task-storage`).
2) Use a remote filesystem (S3/GCS/Azure/SMB)
- Create a RemoteFileSystem/S3/GCS/Azure block and reference it the same way as above (either via `result_storage=...` on the task or via the default storage block env var). This avoids the need for a shared local volume.
Important notes
- Make sure you're setting result storage on the task(s). In Prefect 3.x, setting a storage block on the flow does not automatically apply to the tasks' result storage.
- Background task workers also use a small "task scheduling" storage to coordinate parameters. If you customize that, it must be shared as well. If you see a similar "parameters" or "scheduling" path error, it's the same root cause.
- Known behavior: if the result read fails, the file-not-found error can obscure the original exception. Once storage is shared properly, `.result_async()` will surface the real task exception as expected.
Quick debugging checks
Run these in both a task worker and the flow process to confirm they're using the same storage:
```python
from prefect.results import get_default_result_storage

fs = get_default_result_storage()
print("Result storage:", fs)
```
And inside a flow after you submit:
```python
state = await fut.get_state_async()
print(state.type, state.name, state.message)  # does not require reading the result file
```
If you need to avoid raising during gather while you fix storage:
```python
import asyncio

states = await asyncio.gather(
    *(f.result_async(raise_on_failure=False) for f in futures),
    return_exceptions=True,
)
# Then inspect futures[i].state for messages/types
```
A couple of clarifying questions to make sure we aim at the right spot
- Where are your flow runs executed vs. your task workers (same machine, Docker Compose, Kubernetes, different nodes)?
- What exact basepath did you put on your LocalFileSystem block, and is that path mounted and identical across all containers/hosts?
- If you prefer remote storage, do you have an S3/GCS/Azure bucket you can point at instead?

Syméon del Marmol
09/18/2025, 2:38 PM
I still get `Encountered exception during execution: ValueError('Path /task-storage/<id> does not exist.')` errors when a task started as a background task inside a flow fails. I'd expect to see the actual exception being propagated to the parent run.
Nate
09/18/2025, 2:39 PM
> Your path points to a local folder: /task-storage/results/<id>. If your flow run and task workers are in different containers/hosts, that local path must be a shared volume or network drive mounted at the exact same path on all processes. If it isn't, the worker writes the file but the flow process can't see it — hence the "does not exist" error.

Nate
09/18/2025, 2:39 PM
this seems likely
Syméon del Marmol
09/18/2025, 2:42 PM
I first tried with only:
- PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
with this to specify where to go:
- PREFECT_LOCAL_STORAGE_PATH=/task-storage
Then after Marvin's suggestions, I also created blocks for both result and scheduling:
- PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK=local-file-system/shared-task-scheduling
- PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK=local-file-system/shared-results
But nothing changed the result.

Nate
09/18/2025, 2:43 PM
> 'Path /task-storage/<id> does not exist.'
well this most commonly happens when people run flows on containers but have result storage referring to a "local" filesystem that doesn't exist on the container

Syméon del Marmol
09/18/2025, 2:46 PM