Alex Ilyin
08/15/2025, 3:40 PM

Marvin
08/15/2025, 3:40 PM

Nate
08/15/2025, 3:41 PM
settings in ~/.prefect are not synced across machines, they're client-side settings specific to the machine with prefect installed

Nate
08/15/2025, 3:41 PM
you can commit a prefect.toml to VCS to share settings

Marvin
08/15/2025, 3:43 PM
prefect config is local, profile-based, and not synced to your Prefect Cloud workspace. Your colleague's machine has its own local profile, so they won't see your settings unless you explicitly share them.
What’s going on
- Local-only: Config lives in your profiles file and environment, not in the workspace.
- Shared via Cloud: Blocks and Variables are workspace-level and visible to everyone in the workspace. Local config is not.
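A minimal sketch (assuming Prefect 2.x and hypothetical names "warehouse-token" and "batch-size") of what workspace-native sharing looks like from code:
from prefect import flow, variables
from prefect.blocks.system import Secret

@flow
def uses_shared_config():
    # Both lookups resolve against the Cloud workspace, so teammates and
    # workers see the same values without copying any local profile.
    token = Secret.load("warehouse-token").get()              # hypothetical Block
    batch_size = variables.get("batch-size", default="100")   # hypothetical Variable
    return token, batch_size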
Where your config lives
- Default path: ~/.prefect/profiles.toml (i.e., $PREFECT_HOME/profiles.toml)
- Override path: $PREFECT_PROFILES_PATH, if you set it
Handy commands
# See current values and where they come from
prefect config view --show-sources
# List and switch profiles
prefect profile ls
prefect profile use <profile-name>
# Set values in the active profile
prefect config set PREFECT_LOGGING_LEVEL=DEBUG PREFECT_API_URL=https://api.prefect.cloud/api
Ways to share settings across machines
- Copy the profile file:
  - Share your ~/.prefect/profiles.toml (or just the relevant profile section) with your teammate and place it in the same path on their machine. Don't include secrets.
- Script the setup:
  - Check the specific keys you've set with prefect config view, then have your teammate run prefect config set ... with the same values (see the sketch after this list).
- Centralize with a repo path:
  - Put a profiles.toml in a shared repo and point machines at it with PREFECT_PROFILES_PATH=/path/to/shared/profiles.toml.
- Prefer workspace-native sharing where possible:
  - Use Blocks for credentials/connections and Variables for non-sensitive constants; both are shared in the workspace and don't rely on local profiles.
  - Blocks: Blocks (2.x)
  - Variables: Variables (2.x)
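A minimal sketch of the "Script the setup" option, assuming Python 3.11+ (for tomllib), the default profiles path, and a hypothetical profile named "default"; it prints a prefect config set command a teammate can run:
import tomllib
from pathlib import Path

PROFILES_PATH = Path.home() / ".prefect" / "profiles.toml"
PROFILE = "default"  # hypothetical profile name; substitute your own

with PROFILES_PATH.open("rb") as f:
    profiles = tomllib.load(f)

# Each [profiles.<name>] table maps SETTING_NAME -> value; skip anything
# secret-looking (e.g. PREFECT_API_KEY) before sharing the command.
settings = profiles.get("profiles", {}).get(PROFILE, {})
pairs = " ".join(f"{key}={value}" for key, value in settings.items() if "KEY" not in key)
print("prefect config set " + pairs)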
Docs for reference
- Settings and profiles: Settings & Profiles (2.x)
- Workspaces overview: Workspaces (2.x)
If you tell me which specific settings you're trying to share, I'll give you the exact commands or the right Block/Variable setup. I promise not to look too thrilled about it.

Nate
08/15/2025, 3:50 PM

Alex Ilyin
08/15/2025, 3:51 PM
prefect.toml is prefect 3.x only?

Nate
08/15/2025, 3:53 PM
you could share your ~/.prefect/profiles.toml with your colleague but in general i wouldn't recommend committing those because they often contain sensitive values

Alex Ilyin
08/15/2025, 3:59 PM
sharing ~/.prefect/ is what we were doing in the past. I guess some motivation to move to prefect 3.x soon 😄

Alex Ilyin
08/15/2025, 4:00 PM

Marvin
08/15/2025, 4:00 PM

Nate
08/15/2025, 4:02 PM
PREFECT_RESULTS_PERSIST_BY_DEFAULT or something, you prob want s3 or gcs to back it esp if you have containerized runtimes (since you don't want to write results to disk on the ephemeral container because it will die and the disk will go away)
Marvin
08/15/2025, 4:02 PM
1) Create a storage block for results (one-time setup):
# one-time setup (run once to create the block)
from prefect.filesystems import RemoteFileSystem

RemoteFileSystem(
    basepath="s3://my-bucket/prefix",  # or gcs://, az://, etc. via fsspec
    settings={"profile": "my-aws-profile"},  # or rely on env vars/instance roles
).save("results-remote", overwrite=True)
2) Use that block in your flow (flow-level settings apply to tasks unless overridden):
from prefect import flow, task
from prefect.filesystems import RemoteFileSystem
from prefect.serializers import JSONSerializer

fs = RemoteFileSystem.load("results-remote")

@flow(persist_result=True, result_storage=fs, result_serializer=JSONSerializer())
def my_flow():
    @task
    def compute():
        return {"answer": 42}

    return compute()
Notes and options
- You can set persist_result, result_storage, and result_serializer on tasks as well. Task-level values override the flow's (see the task-level sketch after this list).
- If you want to avoid re-fetching from storage within the same process, set cache_result_in_memory=True on tasks.
- Prefer provider-specific blocks if you're already using integrations:
  - S3: prefect-aws S3Bucket
  - GCS: prefect-gcp GcsBucket
  - Azure: prefect-azure AzureBlobStorage
Create the block in the UI (Blocks > + Create) or via Python, then pass it as result_storage.
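A minimal sketch of a task-level override, reusing the "results-remote" block from above; the serializer choice is only an example:
from prefect import flow, task
from prefect.filesystems import RemoteFileSystem
from prefect.serializers import JSONSerializer, PickleSerializer

fs = RemoteFileSystem.load("results-remote")

# Task-level settings win over the flow-level defaults.
@task(persist_result=True, result_serializer=PickleSerializer(), cache_result_in_memory=False)
def heavy_compute():
    return {"answer": 42}

@flow(persist_result=True, result_storage=fs, result_serializer=JSONSerializer())
def my_flow():
    # heavy_compute's result is pickled; everything else in this flow
    # falls back to the flow-level JSON serializer.
    return heavy_compute()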
Global toggle (optional)
- You can default everything to persist without changing decorators by setting in your profile or env:
export PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
You'll still need to provide result_storage (e.g., at the flow level) so results land in remote storage instead of local.
What you’ll see in Cloud
- Cloud shows that results were persisted and where; it does not copy/store the payload. You read results by letting Prefect fetch them when needed (e.g., for retries/caching) or by accessing your storage directly.
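A minimal sketch of reading a persisted result back through Prefect, reusing the my_flow example above; state.result() resolves the stored reference and fetches the payload from your storage:
# Run the flow and keep the final State instead of the raw return value.
state = my_flow(return_state=True)

# .result() pulls the persisted payload back from the configured result storage.
print(state.result())  # {"answer": 42}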
Docs (2.x)
- Results: Results (2.x)
- Filesystems/Storage: Filesystems (2.x)
- Blocks: Blocks (2.x)
If you tell me your storage of choice (S3, GCS, Azure), I'll give you the exact block and snippet. I can hardly contain my excitement.

Alex Ilyin
08/15/2025, 4:19 PM

Alex Ilyin
08/15/2025, 6:47 PM

Marvin
08/15/2025, 6:47 PM

Marvin
08/15/2025, 6:49 PM
Do you mean:
- the block type slug (e.g., s3-bucket), or
- the block document slug (e.g., s3-bucket/prod-results)?
Ways to find them in Prefect 2.x:
UI
- Cloud > Blocks > open your block. The details page shows a "Slug" field like <block-type-slug>/<block-name>.
In code
- Block type slug (from the class):
from prefect.filesystems import RemoteFileSystem
RemoteFileSystem.get_block_type_slug() # e.g., "remote-file-system"
- Block document slug (from name + type via the client):
from prefect.client.orchestration import get_client
import asyncio

async def show_slug():
    async with get_client() as client:
        bd = await client.read_block_document_by_name(
            name="results-remote",
            block_type_slug="remote-file-system",
        )
        print(bd.slug)  # "remote-file-system/results-remote"

asyncio.run(show_slug())
Manual rule of thumb
- Block document slug = <block-type-slug>/<block-name>
Docs (2.x)
- Blocks overview: Blocks (2.x)
If you tell me the block class or name you're using, I'll spit out its exact slug with less suspense.

Alex Ilyin
08/15/2025, 7:31 PM
I want to set PREFECT_DEFAULT_RESULT_STORAGE_BLOCK in prefect.yaml to point to an s3 bucket

Marvin
08/15/2025, 7:31 PM

Nate
08/15/2025, 7:32 PM
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK=s3/my-block-name

Marvin
08/15/2025, 7:33 PM
The setting you want is PREFECT_DEFAULT_RESULT_STORAGE_BLOCK, and you'll point it at an S3Bucket block by its slug.
Quick recipe
1) Install the AWS collection wherever your flows/workers run:
pip install "prefect-aws>=0.3"
2) Create the S3 block (UI or Python). Example in Python:
from prefect_aws.credentials import AwsCredentials
from prefect_aws.s3 import S3Bucket

# Optional: save a credentials block (or rely on env vars / an instance role)
AwsCredentials().save("default-aws", overwrite=True)

S3Bucket(
    bucket_name="my-results-bucket",
    bucket_folder="prefect-results",  # optional prefix
    credentials=AwsCredentials.load("default-aws"),
).save("prod-results", overwrite=True)
The resulting block document slug will be s3-bucket/prod-results.
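A quick way to double-check the type-slug half of that value (a sketch, assuming prefect-aws is installed):
from prefect_aws.s3 import S3Bucket

# Prints "s3-bucket"; combined with the block name it gives the
# document slug "s3-bucket/prod-results" used below.
print(S3Bucket.get_block_type_slug())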
3) Add to your prefect.yaml (under a profile’s env):
version: 2.0
profiles:
  default:
    env:
      PREFECT_DEFAULT_RESULT_STORAGE_BLOCK: "s3-bucket/prod-results"
4) Ensure results are actually persisted
- Set persist_result=True on your flows/tasks (or wherever you want persistence). Example:
from prefect import flow, task

@task(persist_result=True)
def compute():
    return 42

@flow
def my_flow():
    return compute()

if __name__ == "__main__":
    my_flow()
With the default storage block set, Prefect will write results to that S3 block unless you override result_storage in code.
Notes
- The value must be the block document slug: <block-type-slug>/<block-name> (e.g., s3-bucket/prod-results).
- Your workers must have prefect-aws installed so they can load and use the S3 block at runtime.
- You can copy the slug from the block’s page in the UI.
Docs
- Results: Results (2.x)
- Blocks: Blocks (2.x)
- Prefect AWS S3 block docs: prefect-aws collection

Nate
08/15/2025, 7:33 PM
s3-bucket/{block_name}

Alex Ilyin
08/15/2025, 7:34 PM
I set PREFECT_RESULTS_PERSIST_BY_DEFAULT to true in prefect.yaml, for some reason in the flow run it's trying to use a different s3 block 🤷

Marvin
08/15/2025, 7:34 PM

Nate
08/15/2025, 7:35 PM

Marvin
08/15/2025, 7:35 PM
1) Check what the runtime actually sees
prefect config view --show-sources | grep -E 'PREFECT_DEFAULT_RESULT_STORAGE_BLOCK|PREFECT_RESULTS_PERSIST_BY_DEFAULT|PREFECT_PROFILE'
env | grep PREFECT_DEFAULT_RESULT_STORAGE_BLOCK
Environment variables override profile config; code overrides both.
2) Log what the flow sees at runtime
from prefect import flow, get_run_logger
from prefect.settings import (
    PREFECT_DEFAULT_RESULT_STORAGE_BLOCK,
    PREFECT_RESULTS_PERSIST_BY_DEFAULT,
)

@flow
def debug_settings():
    log = get_run_logger()
    log.info(f"default result storage block: {PREFECT_DEFAULT_RESULT_STORAGE_BLOCK.value()}")
    log.info(f"persist by default: {PREFECT_RESULTS_PERSIST_BY_DEFAULT.value()}")

debug_settings()
3) Check for code-level overrides
- If you set result_storage= on the @flow or any @task, that wins over the default.
- Task-level settings override flow-level.
4) Make sure you're not mixing "code storage" vs "result storage"
- Deployment storage (e.g., S3) is for your flow code/artifacts. Logs about "downloading flow" use that block. (A sketch contrasting the two follows this list.)
- Result persistence logs say things like "Persisting result to …" and should use PREFECT_DEFAULT_RESULT_STORAGE_BLOCK.
5) Verify the slug and workspace
- The setting must be the block document slug: s3-bucket/<block-name> (e.g., s3-bucket/prod-results).
- Ensure that block exists in the same workspace as the run.
6) Version and deps
- Prefect 2.x on all machines should be recent enough to support these settings.
- Workers need prefect-aws installed to load S3Bucket.
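A minimal sketch of that code-storage vs result-storage split, assuming Prefect 2.x, prefect-aws, and two hypothetical blocks named "flow-code" and "prod-results":
from prefect import flow
from prefect.deployments import Deployment
from prefect_aws.s3 import S3Bucket

# Results are written to the block referenced here (or to the default
# result storage setting if result_storage is omitted).
@flow(persist_result=True, result_storage=S3Bucket.load("prod-results"))
def my_flow():
    return 42

# Deployment storage only holds the flow's source code; "downloading flow"
# log lines refer to this block, not to result persistence.
deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="storage-split-demo",
    storage=S3Bucket.load("flow-code"),  # hypothetical code-storage block
)
deployment.apply()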
Canonical prefect.yaml snippet
version: 2.0
profiles:
  default:
    env:
      PREFECT_DEFAULT_RESULT_STORAGE_BLOCK: "s3-bucket/prod-results"
      PREFECT_RESULTS_PERSIST_BY_DEFAULT: "true"
Force it in code (to rule out config issues)
from prefect import flow, task
from prefect_aws.s3 import S3Bucket

@task(persist_result=True)
def compute():
    return 42

@flow(result_storage=S3Bucket.load("prod-results"))
def my_flow():
    return compute()

my_flow()
If it’s still grabbing the “other” block, please share:
- The exact slug you set in prefect.yaml
- A couple of log lines showing which block is being used
- Where the flow is running (worker environment image/profile)
Docs for reference
- Results: Results (2.x)
- Settings: Settings & Profiles (2.x)
- Blocks: Blocks (2.x)
We'll find the offender. It's always the one you least want to change.

Alex Ilyin
08/15/2025, 7:37 PM
I set PREFECT_DEFAULT_RESULT_STORAGE_BLOCK: 's3-bucket/prefect-results' but in the flow run it's failing on
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid bucket name
on a different s3 bucket block I have

Nate
08/15/2025, 7:38 PM
can you share get_current_settings().results as printed from the flow?

Nate
08/15/2025, 7:39 PM

Alex Ilyin
08/15/2025, 7:39 PM

Alex Ilyin
08/15/2025, 7:40 PM