@Marvin why are my settings in prefect config...
# ask-marvin
a
@Marvin why are my settings in prefect config not saved across different machines authed into the same workspace? my colleague cannot see the same settings as me. for prefect 2.x
m
thought for 172.0 seconds
n
settings live in ~/.prefect and are not synced across machines; they're client-side settings specific to the machine with prefect installed. you can however commit prefect.toml to VCS to share settings
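to double-check where those local settings live on a given machine, a quick sketch:
Copy code
# print where this machine reads Prefect settings and profiles from
from prefect.settings import PREFECT_HOME, PREFECT_PROFILES_PATH

print(PREFECT_HOME.value())           # typically ~/.prefect
print(PREFECT_PROFILES_PATH.value())  # typically ~/.prefect/profiles.toml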

https://www.youtube.com/watch?v=aeWCT_fqdEY

m
Short answer: in Prefect 2.x, prefect config is local, profile-based, and not synced to your Prefect Cloud workspace. Your colleague’s machine has its own local profile, so they won’t see your settings unless you explicitly share them.

What’s going on
- Local-only: config lives in your profiles file and environment, not in the workspace.
- Shared via Cloud: Blocks and Variables are workspace-level and visible to everyone in the workspace. Local config is not.

Where your config lives
- Default path: ~/.prefect/profiles.toml (i.e., $PREFECT_HOME/profiles.toml)
- Override path: $PREFECT_PROFILES_PATH, if you set it

Handy commands
Copy code
# See current values and where they come from
prefect config view --show-sources

# List and switch profiles
prefect profile ls
prefect profile use <profile-name>

# Set values in the active profile
prefect config set PREFECT_LOGGING_LEVEL=DEBUG PREFECT_API_URL=https://api.prefect.cloud/api
Ways to share settings across machines
- Copy the profile file: share your ~/.prefect/profiles.toml (or just the relevant profile section) with your teammate and place it in the same path on their machine. Don’t include secrets.
- Script the setup: check the specific keys you’ve set with prefect config view, then have your teammate run prefect config set ... with the same values.
- Centralize with a repo path: put a profiles.toml in a shared repo and point machines at it with PREFECT_PROFILES_PATH=/path/to/shared/profiles.toml.
- Prefer workspace-native sharing where possible: use Blocks for credentials/connections and Variables for non-sensitive constants; both are shared in the workspace and don’t rely on local profiles (see the sketch after this message).
  - Blocks: Blocks (2.x)
  - Variables: Variables (2.x)

Docs for reference
- Settings and profiles: Settings & Profiles (2.x)
- Workspaces overview: Workspaces (2.x)

If you tell me which specific settings you’re trying to share, I’ll give you the exact commands or the right Block/Variable setup. I promise not to look too thrilled about it.
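For the workspace-native route, a minimal sketch using the built-in JSON block; the block name team-settings and its contents are made up for illustration:
Copy code
# save once from any machine authed into the workspace...
from prefect.blocks.system import JSON

JSON(value={"log_level": "DEBUG", "env": "staging"}).save(
    "team-settings", overwrite=True
)

# ...then any teammate in the same workspace can load the same values
shared = JSON.load("team-settings")
print(shared.value)
Unlike profile config, the block lives server-side in the workspace, so everyone sees the same values.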
n
yep
a
@Nate assuming prefect.toml is prefect 3.x only?
n
oh, yep. sorry. you'd have to probably share your ~/.prefect/profiles.toml with your colleague, but in general i wouldn't recommend committing those because they often contain sensitive values
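if you do share the file, a rough sketch for spotting likely-sensitive keys first (the KEY/TOKEN heuristic below is naive and mine, not Prefect's):
Copy code
# list each profile's settings and flag ones that look sensitive
from pathlib import Path
import tomllib  # Python 3.11+; use the tomli package on older versions

data = tomllib.loads((Path.home() / ".prefect" / "profiles.toml").read_text())
for profile, settings in data.get("profiles", {}).items():
    for key in settings:
        tag = "REDACT" if ("KEY" in key or "TOKEN" in key) else "ok"
        print(f"[{tag}] {profile}.{key}")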
a
yea sharing the file in ~/.prefect/ is what we were doing in the past. I guess some motivation to move to prefect 3.x soon 😄
@Nate or @Marvin how do I enable flow result persistence in cloud? I’m guessing it needs to happen in code
m
thought for 111.5 seconds
n
don't know what marvin will say but easiest way is the env var (you can set on work pool etc) like PREFECT_RESULTS_PERSIST_BY_DEFAULT or something. you prob want s3 or gcs to back it, esp if you have containerized runtimes (since you don't want to write results to disk on the ephemeral container; it will die and the disk will go away)
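something like this is one way to set that per deployment in 2.x (a sketch for the agent/infrastructure style; the flow import and names here are made up):
Copy code
# hypothetical flow module and deployment name, shown for illustration
from prefect.deployments import Deployment
from my_flows import my_flow  # hypothetical import

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="persist-results",
    # infra_overrides sets env vars on the runtime infrastructure,
    # so the ephemeral container gets the setting too
    infra_overrides={"env": {"PREFECT_RESULTS_PERSIST_BY_DEFAULT": "true"}},
)
deployment.apply()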
m
Yes — in Prefect 2.x you enable result persistence in code (or via a profile setting), and you must point it at remote storage. Cloud doesn’t store your results; it stores metadata and a reference to where you put them.

What to do

1) Create a filesystem block that points at your remote storage (S3/GCS/Azure). Example with a generic RemoteFileSystem block:
Copy code
# one-time setup (run once to create the block)
from prefect.filesystems import RemoteFileSystem

RemoteFileSystem(
    basepath="s3://my-bucket/prefix",        # or gcs://, az://, etc. via fsspec
    settings={"profile": "my-aws-profile"},  # or rely on env vars/instance roles
).save("results-remote", overwrite=True)
2) Use that block in your flow (flow-level settings apply to tasks unless overridden):
Copy code
from prefect import flow, task
from prefect.filesystems import RemoteFileSystem
from prefect.serializers import JSONSerializer

fs = RemoteFileSystem.load("results-remote")

@task
def compute():
    return {"answer": 42}

# flow-level result settings apply to tasks unless a task overrides them
@flow(persist_result=True, result_storage=fs, result_serializer=JSONSerializer())
def my_flow():
    return compute()
Notes and options
- You can set persist_result, result_storage, and result_serializer on tasks as well. Task-level values override the flow’s.
- If you want to avoid re-fetching from storage within the same process, set cache_result_in_memory=True on tasks (this is the default).
- Prefer provider-specific blocks if you’re already using integrations:
  - S3: prefect-aws S3Bucket
  - GCS: prefect-gcp GcsBucket
  - Azure: prefect-azure AzureBlobStorage

Create the block in the UI (Blocks > + Create) or via Python, then pass it as result_storage.

Global toggle (optional)
- You can default everything to persist without changing decorators by setting in your profile or env:
Copy code
export PREFECT_RESULTS_PERSIST_BY_DEFAULT=true
You’ll still need to provide result_storage (e.g., at the flow level) so results land in remote storage instead of local.

What you’ll see in Cloud
- Cloud shows that results were persisted and where; it does not copy/store the payload. You read results by letting Prefect fetch them when needed (e.g., for retries/caching) or by accessing your storage directly (see the sketch below).

Docs (2.x)
- Results: Results (2.x)
- Filesystems/Storage: Filesystems (2.x)
- Blocks: Blocks (2.x)

If you tell me your storage of choice (S3, GCS, Azure), I’ll give you the exact block and snippet. I can hardly contain my excitement.
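To illustrate reading persisted results back, a minimal sketch via the flow run state (assumes result storage is already configured as above):
Copy code
from prefect import flow

@flow(persist_result=True)
def my_flow():
    return {"answer": 42}

# return_state=True returns the final State instead of the raw value;
# state.result() has Prefect fetch the payload from result storage
state = my_flow(return_state=True)
print(state.result())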
a
ok yea that makes sense, been a while since I was messing around with Prefect configs. thanks @Nate
@Marvin how do I find the block slug in prefect 2.x
m
thought for 141.3 seconds
Do you mean:
- the block type slug (e.g., s3-bucket), or
- the block document slug (e.g., s3-bucket/prod-results)?

Ways to find them in Prefect 2.x:

UI
- Cloud > Blocks > open your block. The details page shows a “Slug” field like <block-type-slug>/<block-name>.

In code
- Block type slug (from the class):
Copy code
from prefect.filesystems import RemoteFileSystem
RemoteFileSystem.get_block_type_slug()  # e.g., "remote-file-system"
- Block document slug (from name + type via the client):
Copy code
from prefect.client.orchestration import get_client
import asyncio

async def show_slug():
    async with get_client() as client:
        bd = await client.read_block_document_by_name(
            name="results-remote",
            block_type_slug="remote-file-system",
        )
        print(f"{bd.block_type.slug}/{bd.name}")  # "remote-file-system/results-remote"

asyncio.run(show_slug())
Manual rule of thumb
- Block document slug = <block-type-slug>/<block-name>
- To list everything at once, see the sketch below.

Docs (2.x)
- Blocks overview: Blocks (2.x)

If you tell me the block class or name you’re using, I’ll spit out its exact slug with less suspense.
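If you'd rather enumerate everything, a sketch that lists every block document slug in the workspace (assuming, as in recent 2.x releases, that the returned documents carry their block type):
Copy code
import asyncio

from prefect.client.orchestration import get_client

async def list_slugs():
    async with get_client() as client:
        # read all block documents in the workspace and print their slugs
        for bd in await client.read_block_documents():
            print(f"{bd.block_type.slug}/{bd.name}")

asyncio.run(list_slugs())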
a
@Marvin how do I use PREFECT_DEFAULT_RESULT_STORAGE_BLOCK in prefect.yaml to point to an s3 bucket
m
thought for 105.2 seconds
n
Copy code
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK=s3/my-block-name
m
Yes—you can set a default in 2.x via PREFECT_DEFAULT_RESULT_STORAGE_BLOCK, pointing it at an S3Bucket block by its slug. Note it’s a client-side setting (per profile/environment), not a workspace-wide one.

Quick recipe

1) Install the AWS collection wherever your flows/workers run:
Copy code
pip install "prefect-aws>=0.3"
2) Create the S3 block (UI or Python). Example in Python:
Copy code
from prefect_aws.credentials import AwsCredentials
from prefect_aws.s3 import S3Bucket

# Optional: save a credentials block (or rely on env vars/instance roles)
AwsCredentials().save("default-aws", overwrite=True)

S3Bucket(
    bucket_name="my-results-bucket",
    bucket_folder="prefect-results",  # optional prefix
    credentials=AwsCredentials.load("default-aws"),
).save("prod-results", overwrite=True)
The resulting block document slug will be s3-bucket/prod-results.

3) Add it to your prefect.yaml as an env var on the deployment’s work pool job variables (prefect.yaml has no profiles section; the deployment and pool names below are examples):
Copy code
deployments:
  - name: my-deployment
    work_pool:
      name: my-pool
      job_variables:
        env:
          PREFECT_DEFAULT_RESULT_STORAGE_BLOCK: "s3-bucket/prod-results"
4) Ensure results are actually persisted
- Set persist_result=True on your flows/tasks (or wherever you want persistence). Example:
Copy code
from prefect import flow, task

@task(persist_result=True)
def compute():
    return 42

@flow
def my_flow():
    return compute()

if __name__ == "__main__":
    my_flow()
With the default storage block set, Prefect will write results to that S3 block unless you override result_storage in code.

Notes
- The value must be the block document slug: <block-type-slug>/<block-name> (e.g., s3-bucket/prod-results).
- Your workers must have prefect-aws installed so they can load and use the S3 block at runtime.
- You can copy the slug from the block’s page in the UI (or verify it with the sanity check below).

Docs
- Results: Results (2.x)
- Blocks: Blocks (2.x)
- Prefect AWS S3 block docs: prefect-aws collection
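As a quick sanity check that the slug resolves (a sketch; assumes prefect-aws is installed wherever you run it):
Copy code
# Block.load accepts the "<block-type-slug>/<block-name>" document slug
from prefect.blocks.core import Block

block = Block.load("s3-bucket/prod-results")
print(type(block).__name__)  # should print "S3Bucket"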
n
oh yea marvin's right, the block document slug would be s3-bucket/{block_name}
a
@Marvin and maybe @Nate - I use this notation and also set PREFECT_RESULTS_PERSIST_BY_DEFAULT to true in prefect.yaml, for some reason in the flow run it’s trying to use a different s3 block 🤷
m
thought for 51.9 seconds
n
i mean, well, how are you setting it? not sure it's gonna come up with any useful information without more info
m
That should work, so if it’s picking a different S3 block, something is overriding your setting at runtime.

Quick checklist to find the culprit

1) Confirm what the worker actually sees
- On the machine/container where the flow runs:
Copy code
prefect config view --show-sources | grep -E 'PREFECT_DEFAULT_RESULT_STORAGE_BLOCK|PREFECT_RESULTS_PERSIST_BY_DEFAULT|PREFECT_PROFILE'
env | grep PREFECT_DEFAULT_RESULT_STORAGE_BLOCK
- Environment variables override profile config; code overrides both.

2) Log what the flow sees at runtime
Copy code
from prefect import flow, get_run_logger
from prefect.settings import (
    PREFECT_DEFAULT_RESULT_STORAGE_BLOCK,
    PREFECT_RESULTS_PERSIST_BY_DEFAULT,
)

@flow
def debug_settings():
    log = get_run_logger()
    <http://log.info|log.info>(f"default result storage block: {PREFECT_DEFAULT_RESULT_STORAGE_BLOCK.value()}")
    <http://log.info|log.info>(f"persist by default: {PREFECT_RESULTS_PERSIST_BY_DEFAULT.value()}")

debug_settings()
3) Check for code-level overrides
- If you set result_storage= on the @flow or any @task, that wins over the default.
- Task-level settings override flow-level (see the sketch at the end of this message).

4) Make sure you’re not mixing “code storage” vs “result storage”
- Deployment storage (e.g., S3) is for your flow code/artifacts. Logs about “downloading flow” use that block.
- Result persistence logs say things like “Persisting result to …” and should use PREFECT_DEFAULT_RESULT_STORAGE_BLOCK.

5) Verify the slug and workspace
- The setting must be the block document slug: s3-bucket/<block-name> (e.g., s3-bucket/prod-results).
- Ensure that block exists in the same workspace as the run.

6) Version and deps
- Prefect 2.x on all machines should be recent enough to support these settings.
- Workers need prefect-aws installed to load S3Bucket.

Canonical prefect.yaml snippet (env vars go on the deployment’s work pool job variables)
Copy code
deployments:
  - name: my-deployment
    work_pool:
      name: my-pool
      job_variables:
        env:
          PREFECT_DEFAULT_RESULT_STORAGE_BLOCK: "s3-bucket/prod-results"
          PREFECT_RESULTS_PERSIST_BY_DEFAULT: "true"
Force it in code (to rule out config issues)
Copy code
from prefect import flow, task
from prefect_aws.s3 import S3Bucket

@task(persist_result=True)
def compute():
    return 42

@flow(result_storage=S3Bucket.load("prod-results"))
def my_flow():
    return compute()

my_flow()
If it’s still grabbing the “other” block, please share:
- The exact slug you set in prefect.yaml
- A couple of log lines showing which block is being used
- Where the flow is running (worker environment image/profile)

Docs for reference
- Results: Results (2.x)
- Settings: Settings & Profiles (2.x)
- Blocks: Blocks (2.x)

We’ll find the offender. It’s always the one you least want to change.
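To make point 3 concrete, a sketch of how a task-level result_storage silently wins over the default (LocalFileSystem stands in for the "other" block; the names are made up):
Copy code
from prefect import flow, task
from prefect.filesystems import LocalFileSystem

# this block wins for the task below, regardless of PREFECT_DEFAULT_RESULT_STORAGE_BLOCK
other = LocalFileSystem(basepath="/tmp/other-results")

@task(persist_result=True, result_storage=other)
def sneaky():
    return 1

@flow
def parent():
    return sneaky()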
a
@Nate it’s hard to word it for marvin but basically I have PREFECT_DEFAULT_RESULT_STORAGE_BLOCK: 's3-bucket/prefect-results' but in the flow run it’s failing on
Copy code
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid bucket name
on a different s3 bucket block I have
n
are you setting that in multiple places though? i'd be curious to see get_current_settings().results as printed from the flow
like i'm wondering if you're overriding your default result storage by selecting it with an env var or in code elsewhere
a
@Nate this is the only place in my config I’m even calling an s3 block, which is what is strange about the error
let me try logging what you suggested