Marvin
09/27/2025, 4:57 PM
- If you’re using the Python API, either pass image=... to deploy(...) or create the flow from a remote source with Flow.from_source(...).
- If you’re using the CLI (prefect deploy), configure either an image (build/push steps) or a remote code source (pull steps) in your prefect.yaml.
Below are concrete examples for both paths.
Python API examples
1) Use a container image (worker will run this image)
from prefect.flows import Flow

Flow.from_source(
    ".",  # local project root used for the image build context
    entrypoint="flows/etl.py:etl",
).deploy(
    name="etl-k8s",
    work_pool_name="kubernetes-pool",
    image="ghcr.io/acme/prefect-etl:0.1.0",  # image your workers can pull
    build=True,  # build the image
    push=True,   # push it to the registry
)
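If your flow code is baked into the image itself, you can also call .deploy() directly on a decorated flow object. This is a minimal sketch only; the flow function etl, the pool name, and the image tag are placeholder assumptions:

from prefect import flow

@flow(log_prints=True)
def etl():
    print("extract, transform, load")

if __name__ == "__main__":
    # Builds an image containing this project, pushes it, and registers the deployment.
    etl.deploy(
        name="etl-k8s-baked",
        work_pool_name="kubernetes-pool",
        image="ghcr.io/acme/prefect-etl:0.1.0",  # placeholder registry/tag
        build=True,
        push=True,
    )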
2) Use a remote git repository for code
from prefect.flows import Flow

Flow.from_source(
    "https://github.com/your-org/your-repo",  # or git@github.com:org/repo.git
    entrypoint="flows/etl.py:etl",
).deploy(
    name="etl-from-git",
    work_pool_name="process-pool",  # or any pool type
)
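For a private repository (see the pitfalls section below), a minimal sketch is to wrap the source in a GitRepository with credentials; the repository URL and the Secret block name github-access-token are assumptions:

from prefect.blocks.system import Secret
from prefect.flows import Flow
from prefect.runner.storage import GitRepository

Flow.from_source(
    GitRepository(
        url="https://github.com/your-org/your-private-repo.git",
        branch="main",
        # Assumes a Secret block named "github-access-token" already exists.
        credentials={"access_token": Secret.load("github-access-token")},
    ),
    entrypoint="flows/etl.py:etl",
).deploy(
    name="etl-from-private-git",
    work_pool_name="process-pool",
)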
3) Use remote object storage (S3/GCS/Azure) for code
from prefect.filesystems import RemoteFileSystem
from prefect.flows import Flow

storage = RemoteFileSystem(basepath="s3://my-bucket/prefect-project")  # gs://..., az://... also work

Flow.from_source(
    storage,
    entrypoint="flows/etl.py:etl",
).deploy(
    name="etl-from-s3",
    work_pool_name="process-pool",
)
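Note that the environment pulling the code needs the matching fsspec driver installed (s3fs, gcsfs, or adlfs). If the bucket is not reachable via ambient credentials, RemoteFileSystem also accepts a settings dict that is passed through to the underlying fsspec filesystem. A sketch with s3fs-style keys (the key names and values here are assumptions, not from the original answer):

from prefect.filesystems import RemoteFileSystem

storage = RemoteFileSystem(
    basepath="s3://my-bucket/prefect-project",
    settings={
        "key": "AKIA...",      # placeholder access key id (s3fs-style)
        "secret": "REDACTED",  # placeholder secret access key
    },
)
storage.save("prefect-project-storage", overwrite=True)  # optional: persist as a reusable block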
CLI (prefect deploy) examples
In your project root, set up a prefect.yaml with either a remote code source or an image.
A) Remote source (git) — no image required
name: my-project
prefect-version: ">=3.0.0"

pull:
- prefect.deployments.steps.git_clone:
    repository: https://github.com/your-org/your-repo
    branch: main

deployments:
- name: etl-from-git
  entrypoint: flows/etl.py:etl
  work_pool:
    name: process
Then run:
prefect deploy
B) Remote source (object storage)
name: my-project
prefect-version: ">=3.0.0"

pull:
- prefect.deployments.steps.pull_from_remote_storage:
    url: s3://my-bucket/prefect-project  # gs://... or az://... also work
    # requires the matching fsspec driver (s3fs, gcsfs, adlfs) where the code is pulled

deployments:
- name: etl-from-s3
  entrypoint: flows/etl.py:etl
  work_pool:
    name: process
C) Container image (typical for Docker/Kubernetes/ECS)
name: my-project
prefect-version: ">=3.0.0"
build:
- prefect_docker.deployments.steps.build_docker_image:  # requires the prefect-docker package
    id: build_image
    image_name: ghcr.io/acme/prefect-etl
    tag: "0.1.0"
    dockerfile: auto  # build the image from your local project
push:
- prefect_docker.deployments.steps.push_docker_image:
    image_name: "{{ build_image.image_name }}"
    tag: "{{ build_image.tag }}"
deployments:
- name: etl-k8s
  entrypoint: flows/etl.py:etl
  work_pool:
    name: kubernetes-pool
    job_variables:
      image: "{{ build_image.image }}"  # image your workers will pull
Then:
prefect deploy
Valid ways to provide a “remote storage location”
- Git: public HTTPS or private HTTPS/SSH URLs (e.g., https://github.com/org/repo, git@github.com:org/repo.git)
- Object storage: s3://bucket/prefix, gs://bucket/prefix, az://container/prefix
- Other fsspec-compatible remotes (via RemoteFileSystem), e.g., SMB
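As a quick side-by-side of the source forms above (a sketch only; the URLs are placeholders, and the bare object-storage string relies on the matching fsspec driver being installed):

from prefect.filesystems import RemoteFileSystem
from prefect.flows import Flow

ENTRYPOINT = "flows/etl.py:etl"

git_flow = Flow.from_source("https://github.com/your-org/your-repo", entrypoint=ENTRYPOINT)
s3_flow = Flow.from_source("s3://my-bucket/prefect-project", entrypoint=ENTRYPOINT)
smb_flow = Flow.from_source(
    RemoteFileSystem(basepath="smb://fileserver/share/prefect-project"),  # any fsspec-compatible remote
    entrypoint=ENTRYPOINT,
)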
Common pitfalls
- Local-only code with remote workers: If your workers run in Kubernetes/ECS/etc., they can’t see your laptop’s filesystem. Use an image or a remote source.
- Image not pushed: If you set image but don’t push it to a registry your workers can access, runs will fail. Use push=True or push the image yourself.
- Private git repo without credentials: Ensure your workers have access (SSH keys, tokens, or environment credentials).
- Incorrect entrypoint: Must be path/to/file.py:flow_function_name (see the sketch after this list).
- Mixing old 2.x patterns: In 3.x, do not use Deployment.build_from_flow() or prefect deployment build. Use Flow.from_source(...).deploy(...) or prefect deploy.
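To make the entrypoint format concrete, a minimal sketch (the file layout and function name are assumptions):

# flows/etl.py  -> the "flows/etl.py" half of the entrypoint
from prefect import flow

@flow
def etl():  # -> the ":etl" half of the entrypoint
    ...

# entrypoint = "flows/etl.py:etl"
# i.e. <path to the file, relative to the project/source root>:<flow function name>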
Docs and references
- Storing flow code (sources, git, object storage): Store flow code for deployments

Marvin
09/27/2025, 4:57 PM
Could you share:
- Your prefect.yaml (if using the CLI) or the code snippet you’re running (if using Python)?
- Your work pool type (process, docker, kubernetes, etc.)?
I’ll tailor the fix to your setup.