<@ULVA73B9P> I’m trying to upgrade prefect2 to pre...
# ask-community
a
@Marvin I’m trying to upgrade prefect2 to prefect3, what’s the corresponding syntax for push deployment code to a s3 storage
m
thought for 406.6 seconds
Great question — and welcome to Prefect 3.x! In Prefect 3.x, you don’t use 2.x storage blocks (e.g., S3, RemoteFileSystem) or Deployment.build_from_flow. Instead, you point your deployment at a source location and Prefect will package your project and push it there. For S3, the “source” is just an s3:// URL. Here’s the 3.x equivalent for “push code to S3”: Python API (recommended for clarity)
Copy code
# pip install "prefect[aws]"  # ensures s3fs/boto3 are available

from prefect import Flow

# This will package your local project (honoring .prefectignore)
# and upload it under the given S3 prefix. Workers will pull and run from there.
Flow.from_source(
    source="<s3://my-bucket/prefect/my-project>",         # S3 bucket/prefix for your code package
    entrypoint="flows/my_flow.py:my_flow",               # path to your flow function
).deploy(
    name="prod",
    work_pool_name="your-work-pool",                     # e.g., process, docker, kubernetes, etc.
    # push=True is the default; set explicitly if you want
    push=True,
)
Alternative: pass an explicit RemoteFileSystem
Copy code
from prefect import Flow
from prefect.filesystems import RemoteFileSystem

# Prefer IAM roles or environment-based credentials; this shows explicit settings for completeness.
fs = RemoteFileSystem(
    basepath="<s3://my-bucket/prefect/my-project>",
    # Typically unnecessary if you’re using IAM roles, ~/.aws/credentials, or env vars.
    # settings={"key": "...", "secret": "...", "client_kwargs": {"region_name": "us-east-1"}},
)

Flow.from_source(
    source=fs,
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="your-work-pool",
    push=True,
)
Notes and gotchas - Removed in 3.x: do NOT use
Deployment.build_from_flow()
or
prefect deployment build
. Use
Flow.from_source(...).deploy(...)
or
prefect deploy
. - Packaging: Prefect bundles your project directory (honoring
.prefectignore
) and pushes a versioned artifact to S3 under the prefix you provide. - Credentials: Ensure your worker can read from S3 via one of: - IAM role on the worker - AWS env vars (
AWS_ACCESS_KEY_ID
,
AWS_SECRET_ACCESS_KEY
,
AWS_DEFAULT_REGION
) - AWS profile (
AWS_PROFILE
) - Dependencies: Install S3 support -
pip install "prefect[aws]"
(recommended) or ensure
s3fs
and
boto3
are available CLI workflow - The CLI (
prefect deploy
) reads project config from your project and deploys. The simplest path for S3 in 3.x is to call the Python API above (e.g., a small
deploy.py
) — it’s the most explicit and avoids guessing flags. - If you prefer strictly-CLI, you can still run the same Python call via one-liner:
Copy code
uv run --with 'prefect[aws]' python -c "from prefect import Flow; Flow.from_source('<s3://my-bucket/prefect/my-project>','flows/my_flow.py:my_flow').deploy(name='prod', work_pool_name='your-work-pool')"
If you can share: - your worker type (process, docker, kubernetes), - how you’ve been authenticating to AWS in 2.x, I can tailor the S3 path/credentials examples and a .prefectignore suited to your repo.
a
@Marvin I need a process worker and authenticating to AWS using access token env variables
m
thought for 41.3 seconds
Perfect — here’s a concise, working setup for Prefect 3.x with: - a process worker, and - S3 code storage authenticated via AWS env vars Assumptions: - You’ll use AWS env vars (access key/secret, optional session token) on both the machine that runs deploy and the worker machine - You’ve got the prefect[aws] extra installed so S3 is available 1) Install dependencies
Copy code
pip install "prefect[aws]"
2) Set AWS env vars (on both deploy and worker machines)
Copy code
export AWS_ACCESS_KEY_ID=YOUR_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
# Only if using temporary creds:
export AWS_SESSION_TOKEN=YOUR_SESSION_TOKEN
export AWS_DEFAULT_REGION=us-east-1
3) Create a process work pool
Copy code
prefect work-pool create process-pool --type process
4) Example flow and deployment flows/my_flow.py
Copy code
from prefect import flow, get_run_logger

@flow
def my_flow():
    log = get_run_logger()
    <http://log.info|log.info>("Hello from S3-backed deployment!")

if __name__ == "__main__":
    my_flow()
deploy.py
Copy code
from prefect import Flow

# This packages your local project (honoring .prefectignore) and pushes it to S3
Flow.from_source(
    source="<s3://my-bucket/prefect/my-project>",   # bucket/prefix to store your code
    entrypoint="flows/my_flow.py:my_flow",         # path to your flow function
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,                                     # ensures your code is uploaded
)
Optional .prefectignore (to keep bundles small)
Copy code
.venv
__pycache__/
*.pyc
.git/
.env
*.ipynb
data/
Run the deploy
Copy code
python deploy.py
# or ensure the AWS/S3 extra is present:
uv run --with 'prefect[aws]' python deploy.py
5) Start a process worker - Make sure the worker environment also has the AWS env vars and prefect[aws] installed so it can pull from S3.
Copy code
prefect worker start --pool process-pool
# or:
uv run --with 'prefect[aws]' prefect worker start --pool process-pool
What happens under the hood -
Flow.from_source(..., source='s3://...')
tells Prefect to package your project and upload it to that S3 prefix; Prefect creates a versioned subdirectory for each deploy. - The process worker downloads and executes that bundle when runs are submitted. If you share your bucket name/prefix and desired region, I can tailor the code paths and a hardened .prefectignore for your repo.
a
@Marvin received error cannot find correct path for flow code No such file or directory: ’/var/folders/3l/yrzn978512xc5wkh4pv56knm0000gp/T/tmpgo8ffkim/
m
thought for 42.5 seconds
Thanks for the details. That error means the worker downloaded your packaged project into a temp dir, then couldn’t find the entrypoint file inside the bundle. This usually comes down to the entrypoint path or what was packaged. Quick checks and fixes 1) Ensure the entrypoint is correct relative to the project root you deployed from - Example repo:
Copy code
.
  ├─ flows/
  │  └─ my_flow.py   # defines @flow def my_flow()
  └─ deploy.py
Then your entrypoint must be:
Copy code
entrypoint="flows/my_flow.py:my_flow"
- If your code lives under src/, the entrypoint should include that:
Copy code
entrypoint="src/flows/my_flow.py:my_flow"
2) Run deploy from the project root - Prefect packages the current working directory by default. If you ran
python deploy.py
from a subfolder, your entrypoint path and packaged layout won’t match. - Re-run from the repo root:
Copy code
uv run --with 'prefect[aws]' python deploy.py
3) Make sure .prefectignore didn’t exclude your flow file or its parent directories - Keep it minimal until things work:
Copy code
.venv
  __pycache__/
  *.pyc
  .git/
  .env
  data/
- Avoid patterns like
flows/
or
*.py
that would remove your entrypoint. 4) Inspect the deployment to confirm what Prefect stored - This shows the entrypoint and storage it will use:
Copy code
prefect deployment inspect "<flow_name>/<deployment_name>"
Verify the “entrypoint” matches the path you expect inside the package. 5) Re-deploy after corrections - Example:
Copy code
from prefect import Flow

  Flow.from_source(
      source="<s3://my-bucket/prefect/my-project>",
      entrypoint="flows/my_flow.py:my_flow",
  ).deploy(
      name="prod",
      work_pool_name="process-pool",
      push=True,
  )
6) Worker environment - Since you’re using a process worker, ensure on the worker host:
Copy code
pip install "prefect[aws]"
  export AWS_ACCESS_KEY_ID=...
  export AWS_SECRET_ACCESS_KEY=...
  export AWS_SESSION_TOKEN=...   # if using temporary creds
  export AWS_DEFAULT_REGION=us-east-1
  prefect worker start --pool process-pool
If it still fails, can you share: - Your repo tree (top-level only is fine) - The exact entrypoint string you used - Your .prefectignore - The output of
prefect deployment inspect "<flow_name>/<deployment_name>"
With that, I can pinpoint the mismatch and get you unblocked quickly.
a
@Marvin double checked, the “worker” copied nothing into the temp folder, why
m
thought for 51.8 seconds
Got it — if the temp dir is empty, the worker didn’t actually pull any project files. That usually means either nothing was uploaded during deploy or the worker can’t read from your S3 source. Here’s a focused checklist to pinpoint and fix it. Most common causes - Nothing was pushed to S3 at deploy time (missing aws deps or creds on the deploy machine) - The deployment is pointing at Local storage instead of S3 - The worker can’t read from S3 (no creds/region, missing prefect[aws], or wrong prefix) - .prefectignore excluded your files Verify your deployment references S3 - Check the deployment spec:
Copy code
prefect deployment inspect "<flow_name>/<deployment_name>"
Confirm you see storage that points at s3://my-bucket/prefix (not local). Verify S3 artifacts exist - On the machine where you ran deploy, confirm objects were created under your prefix:
Copy code
aws s3 ls <s3://my-bucket/prefect/my-project/> --recursive | head -50
You should see a set of files/folders created at deploy time. If you see nothing, the deploy didn’t push. Re-deploy with explicit S3 and logs - Ensure aws deps and creds on the deploy machine first:
Copy code
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...      # if using temporary creds
export AWS_DEFAULT_REGION=us-east-1
export PREFECT_LOGGING_LEVEL=DEBUG
- Minimal deploy script:
Copy code
from prefect import Flow

Flow.from_source(
    source="<s3://my-bucket/prefect/my-project>",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,
)
- Run it and look for logs like “Uploading project to s3://...” and counts of files uploaded. Verify the worker can read S3 - On the worker host:
Copy code
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...      # if needed
export AWS_DEFAULT_REGION=us-east-1
export PREFECT_LOGGING_LEVEL=DEBUG
aws s3 ls <s3://my-bucket/prefect/my-project/> --recursive | head -50
prefect worker start --pool process-pool
If the AWS CLI can list objects but the worker still pulls nothing, it’s usually missing the prefect[aws] extra or your deployment isn’t actually pointing to S3. Check .prefectignore - Make sure you didn’t exclude your flow files or their parent directories. Start minimal:
Copy code
.venv
__pycache__/
*.pyc
.git/
.env
data/
If it still doesn’t work, please share: - The output of
prefect deployment inspect "<flow>/<deployment>"
- A quick
aws s3 ls <s3://my-bucket/prefect/my-project/> --recursive | head -50
- The deploy logs (with PREFECT_LOGGING_LEVEL=DEBUG) - Your project tree (top-level) and the exact entrypoint string With that, I can pinpoint whether it’s an upload, storage config, or permissions issue and get you unblocked.
a
@Marvin inspect shows entrypoint point to local
m
thought for 18.8 seconds
Thanks — that explains it. If inspect shows the deployment’s entrypoint pointing to local, the deployment was created without a remote source, so the worker has nothing to download. That’s why the temp directory is empty. Why this happens - Using
my_flow.deploy(...)
on an in-memory flow defaults to Local storage - Or running
prefect deploy
without configuring a remote source in your project - Either way, the deployment references your local filesystem paths, which do not exist on the worker Fix: re-deploy with S3 as the source Use the Python API to explicitly set S3 as the source, then deploy. This updates the deployment to remote storage and pushes your code package to S3. deploy.py
Copy code
from prefect import Flow

Flow.from_source(
    source="<s3://my-bucket/prefect/my-project>",   # S3 bucket/prefix for your code package
    entrypoint="flows/my_flow.py:my_flow",         # relative to your project root
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,                                     # ensures your project is uploaded to S3
)
Run it (from your project root) with AWS creds present and the S3 extra installed:
Copy code
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...          # only if using temp creds
export AWS_DEFAULT_REGION=us-east-1
export PREFECT_LOGGING_LEVEL=DEBUG    # optional, helpful logs

python deploy.py
# or
uv run --with 'prefect[aws]' python deploy.py
Verify it updated
Copy code
prefect deployment inspect "<flow_name>/prod"
You should now see storage pointing at your s3:// prefix and a relative entrypoint (not a local absolute path). Start the process worker (with AWS creds and prefect[aws] on the worker host)
Copy code
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...          # if applicable
export AWS_DEFAULT_REGION=us-east-1
prefect worker start --pool process-pool
Optional cleanup - If you want to avoid confusion with the old local deployment, you can delete it:
Copy code
prefect deployment delete "<flow_name>/<old_deployment_name>"
If you’d like, share how you originally created the deployment (code or CLI command) and I’ll point to the minimal change needed.
a
@Marvin I don’t want to download I want to UPLOAD the code to s3
m
thought for 26.1 seconds
Totally—uploading happens at deploy time. To push your project code to S3 in Prefect 3.x, you must create the deployment using a remote source (your s3:// URL) and
push=True
. That upload step runs where you execute the deploy code (not on the worker). Do this from your project root: 1) Ensure S3 support and AWS env vars are set on the deploy machine
Copy code
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=YOUR_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
# If temporary creds:
export AWS_SESSION_TOKEN=YOUR_SESSION_TOKEN
export AWS_DEFAULT_REGION=us-east-1
2) Minimal deploy script that UPLOADS your project to S3 deploy.py
Copy code
from prefect import Flow

Flow.from_source(
    source="<s3://my-bucket/prefect/my-project>",   # where to upload your packaged code
    entrypoint="flows/my_flow.py:my_flow",         # relative path to your flow in the repo
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,                                     # this triggers the upload to S3
)
3) Run the deploy (this is the upload step)
Copy code
uv run --with 'prefect[aws]' python deploy.py
# or: python deploy.py (if prefect[aws] already installed)
4) Verify it was uploaded
Copy code
aws s3 ls <s3://my-bucket/prefect/my-project/> --recursive | head -50
You should see a versioned folder with your packaged project files. Important notes - If
prefect deployment inspect "<flow>/<deployment>"
shows local paths, that deployment was created without a remote source. Re-run the deploy above to convert it to S3 and upload your code. - The worker will only download later at run time; it never performs the upload. Make sure your upload (deploy) runs on a machine with access to your source code and AWS credentials. - Keep
.prefectignore
from excluding your flow files so they’re included in the uploaded package. If you prefer a CLI one-liner for the same upload action:
Copy code
uv run --with 'prefect[aws]' python -c "from prefect import Flow; Flow.from_source('<s3://my-bucket/prefect/my-project>','flows/my_flow.py:my_flow').deploy(name='prod', work_pool_name='process-pool', push=True)"
If you share your bucket/prefix and project tree top-level, I can confirm the exact
entrypoint
and S3 path you should use.
a
@Marvin you are severely hallucinating, the documentation shows prefect3 have no ability to upload to s3
m
thought for 43.2 seconds
You’re right — and I’m sorry for the confusion earlier. In Prefect 3.x, deployments do not upload your code to S3 for you. A deployment only stores a reference to your source; if you want S3-based code storage, you must pre-populate S3 yourself, then point the deployment at that path so the worker can download at run time. Two supported patterns: A) Recommended for process workers: use Git as the source - No uploads needed; workers clone the repo at run time (use a token for private repos).
Copy code
from prefect import Flow

Flow.from_source(
    source="<https://github.com/your-org/your-repo>",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)
B) If you must use S3: upload your project to S3 yourself, then point the deployment at that S3 path 1) Sync your project to a versioned S3 prefix (from your repo root):
Copy code
# minimal excludes; adjust as needed
aws s3 sync . <s3://my-bucket/prefect/my-project/releases/$>(git rev-parse --short HEAD) \
  --exclude ".git/*" --exclude ".venv/*" --exclude "__pycache__/*" --exclude "*.pyc" --exclude ".env"
2) Create the deployment that references that uploaded directory:
Copy code
from prefect import Flow

Flow.from_source(
    source="<s3://my-bucket/prefect/my-project/releases/<your-commit-or-version>>",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)
3) Ensure your worker has: - prefect[aws] installed - AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (/ AWS_SESSION_TOKEN if temporary) and AWS_DEFAULT_REGION set Why your temp dir was empty - Your deployment pointed to local storage (inspect showed local), so the worker had nothing remote to pull. After you either switch to Git or pre-upload to S3 and point the deployment at that S3 prefix, the worker will download to a temp dir at run time. If you want, I can give you a tiny helper script that: - builds a version label (e.g., git SHA), - syncs to s3://my-bucket/prefect/my-project/releases/&lt;version>, - and updates the deployment to that exact prefix.