Fernando Müller
08/18/2025, 2:57 PM
Marvin
08/18/2025, 2:57 PM
Marvin
08/18/2025, 3:01 PM
<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/test-deployment:2025-08-18
2) Deploy with build and push enabled:
print_random_number_flow.deploy(
name="test-deployment",
work_pool_name="ecs-push-pool",
image="<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/test-deployment:2025-08-18",
build=True,
push=True,
job_variables={"cpu": 1024, "memory": 4096, "family": "test-deployment"},
description="Test deployment",
tags=["test", "deployment"],
)
3) Ensure your local Docker can auth to ECR before deploying.
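One way to script that ECR login is a small helper like this (a sketch: the function names are mine, and it assumes boto3 and the docker CLI are available where you run the deploy script):

```python
# Sketch: log the local Docker daemon in to ECR before deploy(push=True).
# Helper names are mine; assumes the docker CLI is on PATH and boto3 is
# installed where you run the deploy script.
import base64
import subprocess


def decode_ecr_token(b64_token: str) -> tuple[str, str]:
    """ECR authorization tokens are base64("AWS:<password>")."""
    user, _, password = base64.b64decode(b64_token).decode().partition(":")
    return user, password


def ecr_docker_login(region: str) -> None:
    import boto3  # imported lazily so the decoder above has no AWS dependency

    auth = boto3.client("ecr", region_name=region).get_authorization_token()
    data = auth["authorizationData"][0]
    user, password = decode_ecr_token(data["authorizationToken"])
    subprocess.run(
        ["docker", "login", "--username", user, "--password-stdin", data["proxyEndpoint"]],
        input=password.encode(),
        check=True,
    )
```

Equivalent to the usual `aws ecr get-login-password | docker login ...` one-liner, just callable from the same script that runs `deploy()`.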
Docs: ECS guide and Store flow code
Option B — Fetch code from Git at runtime (no image build during deploy)
1) Put your flow in a git repo and deploy “from source”:
(
print_random_number_flow
.from_source(
source="https://github.com/<org>/<repo>.git",
entrypoint="flows/test_deployments/test_deployments_flow.py:print_random_number_flow",
)
.deploy(
name="test-deployment",
work_pool_name="ecs-push-pool",
image="prefecthq/prefect-client:3-latest", # must include your runtime deps
build=False,
push=False,
job_variables={"cpu": 1024, "memory": 4096, "family": "test-deployment"},
)
)
2) The worker will clone the repo into the container at run time. Make sure the image has any Python deps your flow needs.
Docs: Deploy via Python and Store flow code
Quick sanity checks either way:
- In the Prefect UI, your deployment should show Storage: Git (if using from_source) or Storage: '.' and an image that actually contains your code (if baking it).
- The image shown in the deployment details must be pullable by ECS (ECR is safest).
- The entrypoint path must be correct relative to /opt/prefect inside the container.
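To check the entrypoint string locally before deploying, you can try importing it yourself (a sketch; `load_entrypoint` is a hypothetical helper, not a Prefect API):

```python
# Sketch: verify a "path/to/file.py:function" entrypoint resolves locally
# before deploying. Hypothetical helper, not part of Prefect's API.
import importlib.util
from pathlib import Path


def load_entrypoint(entrypoint: str):
    """Split 'file.py:obj', import the file, and return the named object."""
    path_str, _, obj_name = entrypoint.partition(":")
    path = Path(path_str)
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, obj_name)  # raises if the name is wrong
```

Run it from the repo root with the exact string you pass to `from_source`; if it raises, the path or function name is off.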
Which route do you want to take: bake the code into an image, or pull from Git at runtime? I’ll give you the exact steps for your setup.
Fernando Müller
08/18/2025, 3:13 PM
Marvin
08/18/2025, 3:13 PM
Marvin
08/18/2025, 3:15 PM
(
print_random_number_flow
.from_source(
source=f"https://{os.environ['GITLAB_USERNAME']}:{os.environ['GITLAB_TOKEN']}@gitlab.com/<group>/<repo>.git",  # requires `import os`; $-style vars don't expand inside Python strings
entrypoint="flows/test_deployments/test_deployments_flow.py:print_random_number_flow",
# optionally pin a branch/commit if supported in your version:
# reference="main",
)
.deploy(
name="test-deployment",
work_pool_name="ecs-push-pool",
image="prefecthq/prefect-client:3-latest",
build=False,
push=False,
job_variables={"cpu": 1024, "memory": 4096, "family": "test-deployment"},
)
)
Where to put the env vars:
- Prefect UI -> your ECS work pool -> Set environment variables (recommended so every task gets them)
- Or inject them at deployment time via your pool’s base job template (if you’ve customized it)
- Or use ECS task definition secrets from AWS Secrets Manager and map them to env vars (more ops-friendly)
Alternative: SSH deploy key
- Use source="git@gitlab.com:<group>/<repo>.git"
- Ensure the container has:
- a private key at /root/.ssh/id_ed25519 (or rsa), correct file perms
- known_hosts entry for gitlab.com
- ssh client installed
- In ECS, you’ll need to inject the key (e.g., Secrets Manager -> env var -> entrypoint script that writes the key) or bake it into the image. Secure but more setup.
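That "entrypoint script that writes the key" step could be sketched roughly like this (assumptions: the secret arrives in an env var such as SSH_PRIVATE_KEY, and the helper names are mine, not Prefect conventions):

```python
# Sketch: materialize an SSH deploy key inside the container before the
# worker clones the repo. SSH_PRIVATE_KEY and the helper names are
# assumptions for illustration, not Prefect conventions.
import subprocess
from pathlib import Path


def write_deploy_key(key_material: str, ssh_dir: Path = Path("/root/.ssh")) -> Path:
    """Write the private key with the permissions ssh insists on."""
    ssh_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
    key_path = ssh_dir / "id_ed25519"
    key_path.write_text(key_material)
    key_path.chmod(0o600)  # ssh refuses group/world-readable keys
    return key_path


def pin_known_hosts(ssh_dir: Path = Path("/root/.ssh"), host: str = "gitlab.com") -> None:
    """Record the host key so the clone is non-interactive."""
    scan = subprocess.run(
        ["ssh-keyscan", host], capture_output=True, text=True, check=True
    ).stdout
    (ssh_dir / "known_hosts").write_text(scan)


# e.g. in an entrypoint script: write_deploy_key(os.environ["SSH_PRIVATE_KEY"])
```

Note `pin_known_hosts` needs outbound network access; in a locked-down VPC, bake the known_hosts entry into the image instead.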
Another option: .netrc
- Bake a .netrc with:
- machine gitlab.com login <username> password <token>
- Works, but less flexible than env vars and awkward with Fargate unless you manage the file via image/secrets.
Sanity notes
- Don’t hardcode tokens in the deployment code. Use env vars or ECS secrets.
- If your GitLab is self-hosted with a custom CA, make sure the CA is in the image.
- Make sure your entrypoint path is relative to the repo root.
Docs for reference:
- Prefect: Deploy via Python https://docs-3.prefect.io/v3/how-to-guides/deployments/deploy-via-python
- Prefect: Store flow code https://docs-3.prefect.io/v3/how-to-guides/deployments/store-flow-code
- Prefect ECS guide https://docs-3.prefect.io/integrations/prefect-aws/ecs_guide
- GitLab PATs https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html
Want me to show how to wire the env vars into your ECS work pool or set up the Secrets Manager route? I can be overly helpful, it’s my curse.
Fernando Müller
08/19/2025, 1:31 PM
Marvin
08/19/2025, 1:31 PM
Marvin
08/19/2025, 1:32 PM
# Pick a base with Prefect installed
FROM prefecthq/prefect:3-python3.11
# Workdir where Prefect expects code
WORKDIR /opt/prefect
# Install dependencies first for better cache use
# Support either requirements.txt or pyproject/poetry
COPY requirements.txt* pyproject.toml* poetry.lock* ./
RUN --mount=type=cache,target=/root/.cache/pip \
    if [ -f requirements.txt ]; then \
        pip install -r requirements.txt; \
    elif [ -f pyproject.toml ]; then \
        pip install poetry && \
        poetry config virtualenvs.create false && \
        poetry install --no-interaction --no-ansi; \
    else \
        echo "No requirements.txt or pyproject.toml found"; \
    fi
# Optional: copy any runtime assets if you need them in the image
# COPY . .
2) Deploy with build=True, push=True, and target your registry (e.g., ECR):
(
print_random_number_flow
.from_source(
source=f"https://{os.environ['GITLAB_USERNAME']}:{os.environ['GITLAB_TOKEN']}@gitlab.com/<group>/<repo>.git",  # requires `import os`; $-style vars don't expand inside Python strings
entrypoint="flows/test_deployments/test_deployments_flow.py:print_random_number_flow",
# reference="main", # optionally pin branch/commit
)
.deploy(
name="test-deployment",
work_pool_name="ecs-push-pool",
image="<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/test-deployment:2025-08-18",
build=True,
push=True,
job_variables={"cpu": 1024, "memory": 4096, "family": "test-deployment"},
)
)
B) Prebuild yourself (more control)
- Build and push with docker/BuildKit however you like (use build args, secrets, private indexes, etc.).
- Set image="<your-registry>/<repo>:tag", build=False, and push=False in deploy.
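The prebuild route can be as simple as shelling out to docker before calling deploy (a sketch; the image name is a placeholder and assumes docker is on PATH):

```python
# Sketch: prebuild/push with the docker CLI, then point deploy() at the tag.
# Image name is a placeholder; assumes docker is on PATH.
import subprocess


def build_push_commands(image: str, context: str = ".") -> list[list[str]]:
    """Pure helper so the commands can be inspected before running."""
    return [
        ["docker", "build", "-t", image, context],
        ["docker", "push", image],
    ]


def docker_build_push(image: str, context: str = ".") -> None:
    for cmd in build_push_commands(image, context):
        subprocess.run(cmd, check=True)
```

Then pass the same tag to `.deploy(image=..., build=False, push=False)` so Prefect records it without rebuilding.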
Notes and gotchas:
- from_source only fetches your code; it does not install Python packages at run time. Your image must already have them.
- If you need private package indexes, add PIP_INDEX_URL/PIP_EXTRA_INDEX_URL (and any auth) as build args and envs in your Dockerfile; or prebuild outside Prefect to pass secrets securely.
- Keep your Docker context lean with a .dockerignore.
- Base images: prefecthq/prefect:3-python3.11 is a solid default; prefecthq/prefect-client:3-latest is very bare.
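For the .dockerignore point, a minimal starting file might be (adjust to your repo):

```text
# .dockerignore: keep the build context lean
.git
.venv
__pycache__/
*.pyc
.pytest_cache/
.env
```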
Docs:
- Deploy via Python: https://docs-3.prefect.io/v3/how-to-guides/deployments/deploy-via-python
- Store flow code: https://docs-3.prefect.io/v3/how-to-guides/deployments/store-flow-code
- ECS guide: https://docs-3.prefect.io/integrations/prefect-aws/ecs_guide
If you want, tell me your dependency file setup (requirements vs pyproject) and your registry (ECR/GHCR/other), and I’ll hand you a Dockerfile + deploy snippet tailored to it.
Fernando Müller
08/19/2025, 2:23 PM
Marvin
08/19/2025, 2:23 PM
Marvin
08/19/2025, 2:25 PM
Yes: set EXTRA_PIP_PACKAGES and the container will pip install them on startup. It’s handy for small additions; for anything hefty, bake them into the image to avoid slow cold starts and flaky installs.
How to set it for ECS via job_variables on your deployment:
(
print_random_number_flow
.from_source(
source=f"https://{os.environ['GITLAB_USERNAME']}:{os.environ['GITLAB_TOKEN']}@gitlab.com/<group>/<repo>.git",  # requires `import os`; $-style vars don't expand inside Python strings
entrypoint="flows/test_deployments/test_deployments_flow.py:print_random_number_flow",
# reference="main",
)
.deploy(
name="test-deployment",
work_pool_name="ecs-push-pool",
image="prefecthq/prefect:3-python3.11", # use the full Prefect image (not the client-only image)
build=False,
push=False,
job_variables={
"cpu": 1024,
"memory": 4096,
"family": "test-deployment",
"env": [
{"name": "EXTRA_PIP_PACKAGES", "value": "pandas==2.2.2 pyarrow s3fs"},
# If you use a private index:
# {"name": "PIP_INDEX_URL", "value": "https://<user>:<pass>@<index>/simple"},
# {"name": "PIP_EXTRA_INDEX_URL", "value": "https://<...>"},
],
},
)
)
Notes
- Requires a Prefect runtime image (e.g., prefecthq/prefect:3-*). The prefect-client image is too minimal and won’t run the helper that installs extras.
- This installs at container start, so it adds time to each cold start and needs outbound internet. If your VPC blocks egress or the wheels need build tooling, it may fail; bake them into the image instead.
- You can also set EXTRA_PIP_PACKAGES at the ECS work pool level so all flows get it by default.
Docs:
- ECS guide: https://docs-3.prefect.io/integrations/prefect-aws/ecs_guide
- Store flow code / from_source: https://docs-3.prefect.io/v3/how-to-guides/deployments/store-flow-code
If you tell me the packages you need and your network constraints, I’ll tell you whether this env-var route is safe or if you should bake a custom image.
Fernando Müller
08/19/2025, 3:23 PM
Marvin
08/19/2025, 3:23 PM
Marvin
08/19/2025, 3:24 PM
(
print_random_number_flow
.from_source(
source=f"https://{os.environ['GITLAB_USERNAME']}:{os.environ['GITLAB_TOKEN']}@gitlab.com/<group>/<repo>.git",  # requires `import os`; $-style vars don't expand inside Python strings
entrypoint="flows/test_deployments/test_deployments_flow.py:print_random_number_flow",
# reference="main",
)
.deploy(
name="test-deployment",
work_pool_name="ecs-push-pool",
image="prefecthq/prefect:3-python3.11", # use a Prefect runtime image
build=False,
push=False,
job_variables={
"cpu": 1024,
"memory": 4096,
"family": "test-deployment",
"env": [
{
"name": "EXTRA_PIP_PACKAGES",
"value": "pandas==2.2.2 pyarrow s3fs boto3"
},
# Optional: private index config
# {"name": "PIP_INDEX_URL", "value": "https://<user>:<pass>@<index>/simple"},
# {"name": "PIP_EXTRA_INDEX_URL", "value": "https://<...>"},
],
},
)
)
Tips
- Use a Prefect runtime image (prefecthq/prefect:3-*) so the “install extras on start” hook exists.
- This installs on cold start; for large deps, bake them into the image to avoid slow starts and network flakiness.
- You can set EXTRA_PIP_PACKAGES once at the ECS work pool level if you want it for all deployments.
Refs:
- ECS guide: https://docs-3.prefect.io/integrations/prefect-aws/ecs_guide
- Store flow code / from_source: https://docs-3.prefect.io/v3/how-to-guides/deployments/store-flow-code