Arthur Ren
10/05/2025, 10:03 PM

Marvin
10/05/2025, 10:10 PM
# pip install "prefect[aws]"  # ensures s3fs/boto3 are available
from prefect import Flow
# This will package your local project (honoring .prefectignore)
# and upload it under the given S3 prefix. Workers will pull and run from there.
Flow.from_source(
    source="s3://my-bucket/prefect/my-project",  # S3 bucket/prefix for your code package
    entrypoint="flows/my_flow.py:my_flow",       # path to your flow function
).deploy(
    name="prod",
    work_pool_name="your-work-pool",  # e.g., process, docker, kubernetes, etc.
    # push=True is the default; set explicitly if you want
    push=True,
)
Alternative: pass an explicit RemoteFileSystem
from prefect import Flow
from prefect.filesystems import RemoteFileSystem
# Prefer IAM roles or environment-based credentials; this shows explicit settings for completeness.
fs = RemoteFileSystem(
    basepath="s3://my-bucket/prefect/my-project",
    # Typically unnecessary if you’re using IAM roles, ~/.aws/credentials, or env vars.
    # settings={"key": "...", "secret": "...", "client_kwargs": {"region_name": "us-east-1"}},
)

Flow.from_source(
    source=fs,
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="your-work-pool",
    push=True,
)
Notes and gotchas
- Removed in 3.x: do NOT use Deployment.build_from_flow() or prefect deployment build. Use Flow.from_source(...).deploy(...) or prefect deploy.
- Packaging: Prefect bundles your project directory (honoring .prefectignore) and pushes a versioned artifact to S3 under the prefix you provide.
- Credentials: Ensure your worker can read from S3 via one of:
  - IAM role on the worker
  - AWS env vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION)
  - AWS profile (AWS_PROFILE)
- Dependencies: Install S3 support with pip install "prefect[aws]" (recommended), or ensure s3fs and boto3 are available
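If you want to sanity-check a machine before deploying, a minimal sketch (plain boto3/s3fs, nothing Prefect-specific) is:

import boto3

# Check the S3 filesystem dependency Prefect's s3:// support relies on.
try:
    import s3fs  # noqa: F401
except ImportError:
    raise SystemExit('s3fs not importable; run: pip install "prefect[aws]"')

# Check that boto3 can resolve credentials (IAM role, env vars, or ~/.aws/credentials).
if boto3.Session().get_credentials() is None:
    raise SystemExit("No AWS credentials resolved; S3 reads/writes will fail.")

print("s3fs is installed and AWS credentials are resolvable.")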
CLI workflow
- The CLI (prefect deploy) reads project config from your project and deploys. The simplest path for S3 in 3.x is to call the Python API above (e.g., a small deploy.py); it’s the most explicit and avoids guessing flags.
- If you prefer strictly-CLI, you can still run the same Python call via a one-liner:
uv run --with 'prefect[aws]' python -c "from prefect import Flow; Flow.from_source('s3://my-bucket/prefect/my-project', 'flows/my_flow.py:my_flow').deploy(name='prod', work_pool_name='your-work-pool')"
If you can share:
- your worker type (process, docker, kubernetes),
- how you’ve been authenticating to AWS in 2.x,
I can tailor the S3 path/credentials examples and a .prefectignore suited to your repo.

Arthur Ren
10/05/2025, 11:11 PM

Marvin
10/05/2025, 11:11 PM
1) Install S3 support on the deploy machine
pip install "prefect[aws]"
2) Set AWS env vars (on both deploy and worker machines)
export AWS_ACCESS_KEY_ID=YOUR_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
# Only if using temporary creds:
export AWS_SESSION_TOKEN=YOUR_SESSION_TOKEN
export AWS_DEFAULT_REGION=us-east-1
3) Create a process work pool
prefect work-pool create process-pool --type process
4) Example flow and deployment
flows/my_flow.py
from prefect import flow, get_run_logger

@flow
def my_flow():
    log = get_run_logger()
    log.info("Hello from S3-backed deployment!")

if __name__ == "__main__":
    my_flow()
deploy.py
from prefect import Flow
# This packages your local project (honoring .prefectignore) and pushes it to S3
Flow.from_source(
    source="s3://my-bucket/prefect/my-project",  # bucket/prefix to store your code
    entrypoint="flows/my_flow.py:my_flow",       # path to your flow function
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,  # ensures your code is uploaded
)
Optional .prefectignore (to keep bundles small)
.venv
__pycache__/
*.pyc
.git/
.env
*.ipynb
data/
Run the deploy
python deploy.py
# or ensure the AWS/S3 extra is present:
uv run --with 'prefect[aws]' python deploy.py
5) Start a process worker
- Make sure the worker environment also has the AWS env vars and prefect[aws] installed so it can pull from S3.
prefect worker start --pool process-pool
# or:
uv run --with 'prefect[aws]' prefect worker start --pool process-pool
What happens under the hood
- Flow.from_source(..., source='s3://...') tells Prefect to package your project and upload it to that S3 prefix; Prefect creates a versioned subdirectory for each deploy.
- The process worker downloads and executes that bundle when runs are submitted.
If you share your bucket name/prefix and desired region, I can tailor the code paths and a hardened .prefectignore for your repo.

Arthur Ren
10/05/2025, 11:13 PM

Marvin
10/05/2025, 11:14 PM
1) Confirm your entrypoint matches your project layout
- If your project looks like this:
.
├─ flows/
│  └─ my_flow.py   # defines @flow def my_flow()
└─ deploy.py
Then your entrypoint must be:
entrypoint="flows/my_flow.py:my_flow"
- If your code lives under src/, the entrypoint should include that:
entrypoint="src/flows/my_flow.py:my_flow"
2) Run deploy from the project root
- Prefect packages the current working directory by default. If you ran python deploy.py from a subfolder, your entrypoint path and packaged layout won’t match.
- Re-run from the repo root:
uv run --with 'prefect[aws]' python deploy.py
3) Make sure .prefectignore didn’t exclude your flow file or its parent directories
- Keep it minimal until things work:
.venv
__pycache__/
*.pyc
.git/
.env
data/
- Avoid patterns like flows/ or *.py that would remove your entrypoint.
4) Inspect the deployment to confirm what Prefect stored
- This shows the entrypoint and storage it will use:
prefect deployment inspect "<flow_name>/<deployment_name>"
Verify the “entrypoint” matches the path you expect inside the package.
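The same check can be scripted with the Prefect client if you prefer Python; a rough sketch (the "my_flow/prod" name is hypothetical, and attribute names may vary slightly by Prefect version):

import asyncio
from prefect.client.orchestration import get_client

async def inspect_deployment(name: str = "my_flow/prod") -> None:
    # Reads the same record that `prefect deployment inspect` displays.
    async with get_client() as client:
        deployment = await client.read_deployment_by_name(name)
        print("entrypoint:", deployment.entrypoint)
        print("pull_steps:", deployment.pull_steps)

asyncio.run(inspect_deployment())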
5) Re-deploy after corrections
- Example:
from prefect import Flow
Flow.from_source(
    source="s3://my-bucket/prefect/my-project",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,
)
6) Worker environment
- Since you’re using a process worker, ensure on the worker host:
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=... # if using temporary creds
export AWS_DEFAULT_REGION=us-east-1
prefect worker start --pool process-pool
If it still fails, can you share:
- Your repo tree (top-level only is fine)
- The exact entrypoint string you used
- Your .prefectignore
- The output of prefect deployment inspect "<flow_name>/<deployment_name>"
With that, I can pinpoint the mismatch and get you unblocked quickly.

Arthur Ren
10/05/2025, 11:24 PM

Marvin
10/05/2025, 11:25 PM
Inspect the deployment first
prefect deployment inspect "<flow_name>/<deployment_name>"
Confirm you see storage that points at s3://my-bucket/prefix (not local).
Verify S3 artifacts exist
- On the machine where you ran deploy, confirm objects were created under your prefix:
aws s3 ls s3://my-bucket/prefect/my-project/ --recursive | head -50
You should see a set of files/folders created at deploy time. If you see nothing, the deploy didn’t push.
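If you’d rather check from Python, a boto3 equivalent of that listing (hypothetical bucket/prefix, substitute your own) is:

import boto3

bucket, prefix = "my-bucket", "prefect/my-project/"  # substitute your own values

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=50)

# Print whatever the deploy pushed under the prefix.
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

if resp.get("KeyCount", 0) == 0:
    print("No objects under this prefix - the deploy did not push anything.")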
Re-deploy with explicit S3 and logs
- Ensure aws deps and creds on the deploy machine first:
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=... # if using temporary creds
export AWS_DEFAULT_REGION=us-east-1
export PREFECT_LOGGING_LEVEL=DEBUG
- Minimal deploy script:
from prefect import Flow
Flow.from_source(
    source="s3://my-bucket/prefect/my-project",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,
)
- Run it and look for logs like “Uploading project to s3://...” and counts of files uploaded.
Verify the worker can read S3
- On the worker host:
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=... # if needed
export AWS_DEFAULT_REGION=us-east-1
export PREFECT_LOGGING_LEVEL=DEBUG
aws s3 ls s3://my-bucket/prefect/my-project/ --recursive | head -50
prefect worker start --pool process-pool
If the AWS CLI can list objects but the worker still pulls nothing, it’s usually missing the prefect[aws] extra or your deployment isn’t actually pointing to S3.
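Prefect’s s3:// source support goes through s3fs, so a quick way to exercise the same code path on the worker host is a short s3fs listing (a sketch; the prefix is hypothetical):

import s3fs

# Uses the same credential chain as boto3 (env vars, IAM role, ~/.aws/credentials).
fs = s3fs.S3FileSystem()

try:
    for key in fs.ls("my-bucket/prefect/my-project")[:20]:
        print(key)
except Exception as exc:
    print(f"s3fs could not list the prefix: {exc!r}")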
Check .prefectignore
- Make sure you didn’t exclude your flow files or their parent directories. Start minimal:
.venv
__pycache__/
*.pyc
.git/
.env
data/
If it still doesn’t work, please share:
- The output of prefect deployment inspect "<flow>/<deployment>"
- A quick aws s3 ls s3://my-bucket/prefect/my-project/ --recursive | head -50
- The deploy logs (with PREFECT_LOGGING_LEVEL=DEBUG)
- Your project tree (top-level) and the exact entrypoint string
With that, I can pinpoint whether it’s an upload, storage config, or permissions issue and get you unblocked.

Arthur Ren
10/05/2025, 11:28 PM

Marvin
10/05/2025, 11:29 PM
This usually means the deployment was created with local storage:
- Calling my_flow.deploy(...) on an in-memory flow defaults to local storage
- Or running prefect deploy without configuring a remote source in your project
- Either way, the deployment references your local filesystem paths, which do not exist on the worker
Fix: re-deploy with S3 as the source
Use the Python API to explicitly set S3 as the source, then deploy. This updates the deployment to remote storage and pushes your code package to S3.
deploy.py
from prefect import Flow
Flow.from_source(
    source="s3://my-bucket/prefect/my-project",  # S3 bucket/prefix for your code package
    entrypoint="flows/my_flow.py:my_flow",       # relative to your project root
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,  # ensures your project is uploaded to S3
)
Run it (from your project root) with AWS creds present and the S3 extra installed:
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=... # only if using temp creds
export AWS_DEFAULT_REGION=us-east-1
export PREFECT_LOGGING_LEVEL=DEBUG # optional, helpful logs
python deploy.py
# or
uv run --with 'prefect[aws]' python deploy.py
Verify it updated
prefect deployment inspect "<flow_name>/prod"
You should now see storage pointing at your s3:// prefix and a relative entrypoint (not a local absolute path).
Start the process worker (with AWS creds and prefect[aws] on the worker host)
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=... # if applicable
export AWS_DEFAULT_REGION=us-east-1
prefect worker start --pool process-pool
Optional cleanup
- If you want to avoid confusion with the old local deployment, you can delete it:
prefect deployment delete "<flow_name>/<old_deployment_name>"
If you’d like, share how you originally created the deployment (code or CLI command) and I’ll point to the minimal change needed.

Arthur Ren
10/05/2025, 11:29 PM

Marvin
10/05/2025, 11:30 PM
The upload to S3 happens when you run the deploy with push=True. That upload step runs where you execute the deploy code (not on the worker).
Do this from your project root:
1) Ensure S3 support and AWS env vars are set on the deploy machine
pip install "prefect[aws]"
export AWS_ACCESS_KEY_ID=YOUR_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
# If temporary creds:
export AWS_SESSION_TOKEN=YOUR_SESSION_TOKEN
export AWS_DEFAULT_REGION=us-east-1
2) Minimal deploy script that UPLOADS your project to S3
deploy.py
from prefect import Flow
Flow.from_source(
    source="s3://my-bucket/prefect/my-project",  # where to upload your packaged code
    entrypoint="flows/my_flow.py:my_flow",       # relative path to your flow in the repo
).deploy(
    name="prod",
    work_pool_name="process-pool",
    push=True,  # this triggers the upload to S3
)
3) Run the deploy (this is the upload step)
uv run --with 'prefect[aws]' python deploy.py
# or: python deploy.py (if prefect[aws] already installed)
4) Verify it was uploaded
aws s3 ls s3://my-bucket/prefect/my-project/ --recursive | head -50
You should see a versioned folder with your packaged project files.
Important notes
- If prefect deployment inspect "<flow>/<deployment>" shows local paths, that deployment was created without a remote source. Re-run the deploy above to convert it to S3 and upload your code.
- The worker will only download later at run time; it never performs the upload. Make sure your upload (deploy) runs on a machine with access to your source code and AWS credentials.
- Keep .prefectignore from excluding your flow files so they’re included in the uploaded package.
If you prefer a CLI one-liner for the same upload action:
uv run --with 'prefect[aws]' python -c "from prefect import Flow; Flow.from_source('s3://my-bucket/prefect/my-project', 'flows/my_flow.py:my_flow').deploy(name='prod', work_pool_name='process-pool', push=True)"
If you share your bucket/prefix and the top level of your project tree, I can confirm the exact entrypoint and S3 path you should use.

Arthur Ren
10/05/2025, 11:37 PM

Marvin
10/05/2025, 11:37 PM
A) Point the deployment at a Git repository instead
from prefect import Flow
Flow.from_source(
    source="https://github.com/your-org/your-repo",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)
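If you want to pin a branch (or later add credentials for a private repo), Prefect 3’s GitRepository storage can be passed as the source; roughly, double-checking the import path against your installed Prefect version:

from prefect import Flow
from prefect.runner.storage import GitRepository

Flow.from_source(
    source=GitRepository(
        url="https://github.com/your-org/your-repo",
        branch="main",  # pin the branch the worker should clone
    ),
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)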
B) If you must use S3: upload your project to S3 yourself, then point the deployment at that S3 path
1) Sync your project to a versioned S3 prefix (from your repo root):
# minimal excludes; adjust as needed
aws s3 sync . s3://my-bucket/prefect/my-project/releases/$(git rev-parse --short HEAD) \
--exclude ".git/*" --exclude ".venv/*" --exclude "__pycache__/*" --exclude "*.pyc" --exclude ".env"
2) Create the deployment that references that uploaded directory:
from prefect import Flow
Flow.from_source(
    source="s3://my-bucket/prefect/my-project/releases/<your-commit-or-version>",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)
3) Ensure your worker has:
- prefect[aws] installed
- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (/ AWS_SESSION_TOKEN if temporary) and AWS_DEFAULT_REGION set
Why your temp dir was empty
- Your deployment pointed to local storage (inspect showed local), so the worker had nothing remote to pull. After you either switch to Git or pre-upload to S3 and point the deployment at that S3 prefix, the worker will download to a temp dir at run time.
If you want, I can give you a tiny helper script that:
- builds a version label (e.g., git SHA),
- syncs to s3://my-bucket/prefect/my-project/releases/<version>,
- and updates the deployment to that exact prefix.
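For reference, a rough sketch of that helper (assuming the aws CLI is on PATH and reusing the same hypothetical bucket/prefix, entrypoint, and pool names as above) could look like:

import subprocess
from prefect import Flow

# Hypothetical bucket/prefix; adjust to your setup.
BUCKET_PREFIX = "s3://my-bucket/prefect/my-project/releases"

# 1) Build a version label from the current git commit.
sha = subprocess.check_output(["git", "rev-parse", "--short", "HEAD"], text=True).strip()
release_prefix = f"{BUCKET_PREFIX}/{sha}"

# 2) Sync the project (minus junk) to the versioned prefix via the AWS CLI.
subprocess.run(
    [
        "aws", "s3", "sync", ".", release_prefix,
        "--exclude", ".git/*", "--exclude", ".venv/*",
        "--exclude", "__pycache__/*", "--exclude", "*.pyc", "--exclude", ".env",
    ],
    check=True,
)

# 3) Point the deployment at exactly that release.
Flow.from_source(
    source=release_prefix,
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)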