<@ULVA73B9P> I get this error: ```prefect.excepti...
# ask-marvin
g
@Marvin I get this error:
```
prefect.exceptions.PrefectHTTPStatusError: Client error '404 Not Found' for url 'http://prefect:4200/api/block_types/slug/S3Bucket/block_documents/name/minio-bucket?include_secrets=true'
worker-1  | Response: {'detail': 'Block document not found'}
```
When I run my containerised process worker. I have `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK="S3Bucket/minio-bucket"` specified in the env of the worker. I have created the blocks in the Prefect UI. Why in tarnation isn't this working?
m
thought for 417.8 seconds
Short answer: your worker is looking for a block that doesn’t exist under that slug. Two likely culprits:
1) Wrong env var name
2) Wrong block slug format

What to fix
- Use the correct env var name: `PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK` (not `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK`)
- Use the block type slug (kebab-case), not the class name: `s3-bucket/minio-bucket` (not `S3Bucket/minio-bucket`)

Why you’re seeing 404
Prefect resolves the value as `<block_type_slug>/<block_name>`. Your worker is calling:
`/api/block_types/slug/S3Bucket/block_documents/name/minio-bucket`
There is no block type with slug `S3Bucket`; it should be `s3-bucket`. Hence “Block document not found.”

Quick checks (run these inside the worker container)
- Verify your block exists under the expected slug:
```
prefect block ls
prefect block inspect s3-bucket/minio-bucket
```
- Verify the worker is pointed at the same API you used to create the block:
```
prefect config view --hide-defaults
```
Confirm `PREFECT_API_URL` matches the UI where you created the block. If you created it in Cloud but the worker points at `http://prefect:4200/api`, they’re different universes.

If you’re using MinIO, that’s fine; S3Bucket works with an S3-compatible endpoint. Just ensure:
- The `prefect-aws` collection is installed in the worker image.
- The S3 block was created in the same workspace the worker is using.
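A quick Python sanity check that exercises both of those at once; a minimal sketch, meant to run inside the worker container:
```python
# Importing proves prefect-aws is present; loading proves the block
# document resolves by name via the API.
from prefect_aws import S3Bucket

bucket = S3Bucket.load("minio-bucket")
print(bucket.bucket_name)
```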
Docs, in case you enjoy light reading:
- Blocks overview: Blocks
- Results storage config: Results
- S3 bucket block (prefect-aws): S3Bucket

If this still sulks after the change, share:
- `prefect config view --hide-defaults` output from the worker container
- `prefect block inspect s3-bucket/minio-bucket` output

We’ll make it behave, even if it doesn’t want to.
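If you ever prefer creating the block in code over clicking through the UI, a rough sketch; the field names assume a recent prefect-aws, and every credential/bucket value here is a placeholder:
```python
from prefect_aws import MinIOCredentials, S3Bucket
from prefect_aws.client_parameters import AwsClientParameters

creds = MinIOCredentials(
    minio_root_user="minioadmin",        # placeholder
    minio_root_password="minioadmin",    # placeholder
    aws_client_parameters=AwsClientParameters(endpoint_url="http://minio:9000"),
)
# Saves under the name the worker references as s3-bucket/minio-bucket.
S3Bucket(bucket_name="results", credentials=creds).save("minio-bucket", overwrite=True)
```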
g
@Marvin Do I have to do anything on the machine that is reading these results? Keep in mind that I have a prefect server, the worker I described above and then a 3rd container that runs remote deployments and fetches the results to do something with them
m
thought for 108.1 seconds
thought for 44.8 seconds
Short version: the machine that reads results needs to be able to talk to the Prefect API and your MinIO, and it needs the S3 block type installed. You don’t need to recreate the block locally or set a default storage block to read existing results.

Checklist for the “reader” container
- Install the block’s integration: `pip install prefect-aws` (the S3Bucket block lives there)
- Point it at the same Prefect API/workspace: set `PREFECT_API_URL` (and `PREFECT_API_KEY` if using Cloud)
- Sanity check with:
```
prefect config view --hide-defaults
```
- Make sure it can reach your MinIO endpoint/port and DNS (whatever you set in the S3Bucket block)
- RBAC (Cloud only): the API key must be allowed to read blocks including secrets. Prefect fetches the block document (with secrets) to load the results.

A few clarifications
- You do NOT need to set `PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK` on the reader just to fetch results. That setting is only used when writing new results without an explicit storage specified.
- You do NOT need to recreate the block in the reader’s environment. Prefect loads the exact block document referenced by the flow run via the API.
- If you plan to read the objects directly from MinIO yourself (outside Prefect), then of course you’ll need credentials in that container as well.

Useful commands to double-check things inside the reader container
```
# Verify the block exists and you can read it from this environment
prefect block inspect s3-bucket/minio-bucket

# Confirm API target and credentials
prefect config view --hide-defaults
```
Docs, should you feel adventurous:
- Results storage
- S3Bucket block
- Blocks overview

If you hit any permission or resolution errors on that container, paste the output of the two commands above and we’ll sort it. Begrudgingly, but we will.
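If it helps, a minimal reading sketch for that third container; assumes Prefect 2.x, `prefect-aws` installed, and a placeholder flow run ID:
```python
import asyncio
from uuid import UUID

from prefect import get_client
from prefect.states import get_state_result


async def fetch_result(flow_run_id: str):
    # Talks to the same Prefect API the worker used (PREFECT_API_URL).
    async with get_client() as client:
        flow_run = await client.read_flow_run(UUID(flow_run_id))
    # Prefect resolves the S3 block document via the API, then reads the
    # bytes from MinIO with the credentials stored on the block.
    return await get_state_result(flow_run.state, fetch=True)


result = asyncio.run(fetch_result("<your-flow-run-id>"))
```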
Short version: the reader container doesn’t need its own copy of the block. It needs to be able to fetch the block document from the Prefect API and then reach your MinIO/S3 endpoint to read the bytes. Thrilling, I know.

Checklist for the “results reader” container:
- Point it at the same API/workspace: set `PREFECT_API_URL` to the same value the worker/server use. Sanity check: `prefect config view --hide-defaults`
- Install the block’s package and deps: for S3/MinIO, install `prefect-aws` (which pulls in boto3). The client needs the class to deserialize the block document.
- Network reachability: the MinIO/S3 endpoint set on the `S3Bucket` block must be reachable from the reader container (e.g., `http://minio:9000` only works if they share a Docker network).
- Permissions/secrets: you do not need to set AWS creds on the reader if the S3 block holds them; Prefect fetches the block document (including secrets) from the API. Just make sure the reader can access the API and the block (same workspace).
- Serializer compatibility: Prefect will use the recorded serializer (default `pickle`) to load the result. If you used pickle, the reader must have any libraries/types required to unpickle the object and a compatible Python version. Use JSON if you want portability.
- Persistence (often missed): setting a default storage block does not automatically persist results. You must enable persistence or there will be nothing to read, either globally with `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true` or per-flow/task with `@flow(persist_result=True)` / `@task(persist_result=True)`; see the sketch after this list.
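To be painfully concrete, a minimal writer-side sketch; the flow is hypothetical, the slug is the one above, and JSON is chosen for portability:
```python
from prefect import flow


@flow(
    persist_result=True,
    result_storage="s3-bucket/minio-bucket",  # block slug, kebab-case
    result_serializer="json",                 # portable across reader/writer envs
)
def produce_numbers() -> list[int]:
    return [1, 2, 3]
```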
Quick tests to run in the reader container:
```
prefect block inspect s3-bucket/minio-bucket
prefect config view --hide-defaults
```
If `block inspect` fails, the reader isn’t seeing the same API/workspace or the block slug is wrong. If it succeeds but reading still fails, it’s usually network access to MinIO or missing `prefect-aws`.

Two clarifiers to get you unstuck faster:
- How are you fetching results: using Prefect’s client (`get_state_result`) or reading S3 directly?
- Are all three containers on the same Docker network, and does your S3 block’s `endpoint_url` use a hostname reachable from the reader?

Docs, should you need a bedtime story:
- Results: Results
- S3 bucket block: S3Bucket
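And if you end up reading S3 directly (bypassing Prefect), a rough boto3 sketch; the endpoint, bucket, key, and credentials are all placeholders:
```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio:9000",    # must be reachable from this process
    aws_access_key_id="minioadmin",      # placeholder
    aws_secret_access_key="minioadmin",  # placeholder
)
data = s3.get_object(Bucket="results", Key="<result-key>")["Body"].read()
```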
g
@Marvin ah crap. my prefect server, minio and the worker run with docker compose so minio endpoint will be "http://minio:9000" and this is how the s3 block credential is set up as well... but I am running the "other" API that reads the results in a normal, python process.... what do
m
thought for 75.1 seconds
Right now your S3 block’s `endpoint_url` is `http://minio:9000`, which only resolves inside your Docker network. Your “other” Python process runs on the host, so “minio” means nothing to it. Same likely applies to your Prefect API URL if it’s set to `http://prefect:4200`.

You’ve got three sane options:

Option 1 — Keep the block as-is; make the host resolve the Docker names
- Ensure MinIO (and Prefect) are published to the host: in docker-compose, have ports like `- "9000:9000"` for MinIO and `- "4200:4200"` for Prefect.
- On the host running the reader process, add entries to `/etc/hosts` (or your local DNS):
  - `127.0.0.1 minio`
  - `127.0.0.1 prefect`
- Then the reader can use `PREFECT_API_URL=http://prefect:4200/api`
- The existing S3 block with `endpoint_url=http://minio:9000`
will now work from both places. Option 2 — Use a host-accessible endpoint in the block - Change the S3 block’s
endpoint_url
to something the host can reach, e.g.: -
<http://host.docker.internal:9000>
(Mac/Windows; on Linux add to each service using it:
extra_hosts: ["host.docker.internal:host-gateway"]
) - or your host’s LAN IP, e.g.
<http://192.168.x.y:9000>
- Make sure the worker containers can also reach that address. Option 3 — Put the reader in the same Docker network - Run the reader in a container attached to the compose network; then
minio
and
prefect
just work. Don’t forget these bits: - The reader needs
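If you go with Option 1, a two-second connectivity check from the host-side reader; a minimal sketch assuming the `/etc/hosts` mapping above and the published port:
```python
import socket

# Succeeds only if "minio" resolves on the host (e.g. via /etc/hosts)
# and MinIO's port 9000 is published by docker-compose.
socket.create_connection(("minio", 9000), timeout=2).close()
print("MinIO endpoint reachable from the host")
```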
Don’t forget these bits:
- The reader needs `prefect-aws` installed to deserialize and use the S3 block.
- Point the reader at the same Prefect API/workspace and ensure it can resolve that host as well.
- If you used the default `pickle` serializer, the reader must have compatible code/versions to unpickle.
- Results must actually be persisted: set `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true` or use `@flow(persist_result=True)`.

Quick sanity checks from the reader:
```
prefect config view --hide-defaults
prefect block inspect s3-bucket/minio-bucket
```
Docs, for when you’ve finished wrestling your /etc/hosts:
- Results: Results
- Blocks: Blocks
- S3Bucket: S3Bucket

Pick one of the options above and you’ll be reading results without further melodrama.