David Martin Calalang
08/26/2025, 2:20 PM
"pip install prefect-aws && prefect worker start --pool scalable-data-pipeline --type ecs"
What additional commands/tags should I add if I wanted to add details like a task role, cluster, VPC, and whatnot? Right now I am manually adding those details in the Prefect UI.
Marvin
08/26/2025, 2:26 PM
Those details (task role, cluster, VPC, etc.) aren’t passed to `prefect worker start`. They belong in the work pool’s base job template or per-deployment job variables. Then the worker simply polls the pool and uses those settings.
Also note: in Prefect 3.x you do not pass `--type` to `prefect worker start`. The worker type comes from the work pool. Use `--type ecs` when creating the pool.
Here’s the recommended flow:
1) Install the AWS integration
pip install prefect-aws
2) Create or update the ECS work pool with a base job template
- First, fetch the default ECS base job template and write it to a file:
prefect work-pool get-default-base-job-template --type ecs --file ecs-base-job-template.json
- Open `ecs-base-job-template.json` and fill in ECS details. Look for fields like:
- cluster (name or ARN)
- taskRoleArn
- executionRoleArn
- cpu, memory, family
- launchType or capacityProviderStrategy
- networkConfiguration.awsvpcConfiguration.subnets
- networkConfiguration.awsvpcConfiguration.securityGroups
- networkConfiguration.awsvpcConfiguration.assignPublicIp
- platformVersion
- container overrides (environment, command, etc.)
Tip: you’ll see Jinja-style variables (e.g., `{{ cluster }}`) in the template. You can either hardcode values here for the pool, or keep variables and supply values per-deployment via job variables.
- Create the pool (or update an existing one) with that template:
# Create
prefect work-pool create scalable-data-pipeline --type ecs --base-job-template ecs-base-job-template.json
# Or update if the pool already exists
prefect work-pool update scalable-data-pipeline --base-job-template ecs-base-job-template.json
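To build intuition for how those placeholders resolve, here is a rough Python sketch (this is NOT Prefect's actual renderer; the field names are illustrative): a field whose value is exactly `"{{ var }}"` is replaced wholesale by the corresponding variable's value, which is why even a nested object like `network_configuration` can be supplied as a single job variable.

```python
import re

# Rough illustration only (not Prefect's real templating engine): a string
# field that is exactly "{{ var }}" is swapped for the variable's value, so
# whole objects can flow in through a single placeholder.
def render(value, variables):
    if isinstance(value, str):
        match = re.fullmatch(r"\{\{\s*(\w+)\s*\}\}", value)
        if match:
            return variables.get(match.group(1))
        return value
    if isinstance(value, dict):
        return {key: render(val, variables) for key, val in value.items()}
    if isinstance(value, list):
        return [render(val, variables) for val in value]
    return value

# Hypothetical fragment of a job configuration: one hardcoded field,
# two templated fields.
job_configuration = {
    "cluster": "{{ cluster }}",
    "name": "hardcoded-name",
    "network_configuration": "{{ network_configuration }}",
}
variables = {
    "cluster": "my-ecs-cluster",
    "network_configuration": {"awsvpcConfiguration": {"subnets": ["subnet-a"]}},
}
print(render(job_configuration, variables))
```

Hardcoded values survive rendering untouched, while each `{{ … }}` placeholder is filled from the pool defaults or the deployment's job variables.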
3) Start the worker for the pool
prefect worker start --pool scalable-data-pipeline
No `--type` is needed here in 3.x.
4) (Optional) Override values per-deployment via job variables
If you kept variables like `{{ cluster }}` in the pool template, set them on deployments so different deployments can target different clusters, roles, or subnets. Example in Python:
from prefect import flow

@flow
def my_flow():
    ...

my_flow.from_source(
    source="https://github.com/your/repo",
    entrypoint="flows.py:my_flow",  # hypothetical path/to/file.py:flow_name in your repo
).deploy(
    name="ecs-deploy",
    work_pool_name="scalable-data-pipeline",
    job_variables={
        "cluster": "my-ecs-cluster",
        "task_role_arn": "arn:aws:iam::123456789012:role/my-task-role",
        "execution_role_arn": "arn:aws:iam::123456789012:role/my-execution-role",
        "subnets": ["subnet-abc", "subnet-def"],
        "security_groups": ["sg-123"],
        "assign_public_ip": "ENABLED",
        "cpu": 1024,
        "memory": 2048,
    },
)
Match the keys to what you see in the base job template you exported.
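One way to catch mismatched keys early is a small local check against the exported template's schema. This is a sketch, not a Prefect API; the helper name and the abbreviated sample schema are made up for illustration:

```python
# Illustrative helper (not part of Prefect): flag job_variables keys that
# don't appear in the exported template's "variables.properties" schema.
def unknown_job_variables(template: dict, job_variables: dict) -> set:
    allowed = set(template.get("variables", {}).get("properties", {}))
    return set(job_variables) - allowed

# Minimal stand-in for an exported template; your real file has many more keys.
template = {
    "variables": {
        "properties": {
            "cluster": {"type": "string"},
            "task_role_arn": {"type": "string"},
            "network_configuration": {"type": "object"},
        }
    }
}

job_variables = {
    "cluster": "my-ecs-cluster",
    "task_role_arn": "arn:aws:iam::123456789012:role/my-task-role",
    "security_groups": ["sg-123"],  # not in this sample schema -> flagged
}

print(unknown_job_variables(template, job_variables))
```

Any key the check flags would be silently ignored (or rejected) at deploy time, so it's cheaper to catch here.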
5) (Optional) Use Prefect tags to route work
If you want certain workers to only pick up specific deployments, start the worker with matching tags and add the same tags to the deployment:
prefect worker start --pool scalable-data-pipeline --matching-tags ecs-prod
Then add `tags=["ecs-prod"]` to your deployment configuration.
Docs for reference:
- ECS worker guide: Prefect 3 ECS guide
If you share a snippet of your exported `ecs-base-job-template.json`, I can point you to the exact keys to set for your cluster, roles, and networking.
David Martin Calalang
08/26/2025, 8:35 PM
What happens if I run `prefect worker start --pool foo --type ecs` and the work pool foo doesn't exist?
Marvin
08/26/2025, 8:37 PM
`prefect worker start` will not create a work pool for you, and it does not accept `--type`. If the pool doesn’t exist, you’ll get an error indicating the work pool was not found.
Use this flow instead:
# 1) Install the AWS integration
pip install prefect-aws
# 2) Create the ECS work pool (once)
prefect work-pool create foo --type ecs
# optionally set a base job template:
prefect work-pool get-default-base-job-template --type ecs --file ecs-base-job-template.json
# edit ecs-base-job-template.json with your cluster/roles/networking and then:
prefect work-pool update foo --base-job-template ecs-base-job-template.json
# 3) Start the worker for that pool
prefect worker start --pool foo
Docs:
- Workers and work pools: Workers and work pools (Prefect 3)
- ECS worker guide: ECS worker guide
If you’re actually on Prefect 2.x, behavior and flags differ. If that’s the case, let me know your version and I’ll tailor the commands.
David Martin Calalang
08/26/2025, 8:57 PM
Given this snippet from my template:
"command": "{{ command }}",
"env": "{{ env }}",
"labels": "{{ labels }}",
"name": "{{ name }}",
"aws_credentials": "{{ aws_credentials }}",
"network_configuration": "{{ network_configuration }}"
If I wanted to change the exact value of "name" to foo, and I wanted to add two subnet IDs to the network_configuration, how would that look hardcoded in?
Marvin
08/26/2025, 8:59 PM
Hardcoded, that section would look like this:
"command": "{{ command }}",
"env": "{{ env }}",
"labels": "{{ labels }}",
"name": "foo",
"aws_credentials": "{{ aws_credentials }}",
"network_configuration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-0123456789abcdef0",
"subnet-0fedcba9876543210"
]
}
}
If you also want to hardcode security groups and public IP assignment, include them too:
"network_configuration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-0123456789abcdef0",
"subnet-0fedcba9876543210"
],
"securityGroups": ["sg-0123456789abcdef0"],
"assignPublicIp": "ENABLED"
}
}
Notes:
- Keep the key names exactly as shown for the ECS API object: `awsvpcConfiguration`, `subnets`, `securityGroups`, `assignPublicIp`.
- After editing, apply the template with:
prefect work-pool update <your-pool> --base-job-template ecs-base-job-template.json
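If you'd rather script the edit than hand-edit the file, here's a minimal sketch (not a Prefect API; the stand-in template below is abbreviated to the two fields being changed):

```python
import json

# Sketch of making the same edits programmatically: hardcode "name" and
# replace the quoted "{{ network_configuration }}" placeholder with a real
# object (note: an object, not a string).
def patch_template(template: dict) -> dict:
    job = template["job_configuration"]
    job["name"] = "foo"
    job["network_configuration"] = {
        "awsvpcConfiguration": {
            "subnets": [
                "subnet-0123456789abcdef0",
                "subnet-0fedcba9876543210",
            ]
        }
    }
    return template

# Abbreviated stand-in for the exported ecs-base-job-template.json contents.
template = {
    "job_configuration": {
        "name": "{{ name }}",
        "network_configuration": "{{ network_configuration }}",
    },
    "variables": {"type": "object", "properties": {}},
}

patched = patch_template(template)
print(json.dumps(patched, indent=2))
```

In a real script you would `json.load` the exported file, patch it, and `json.dump` it back before running the `prefect work-pool update` command above.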
Docs for reference: ECS worker guide
David Martin Calalang
08/26/2025, 9:32 PM
Marvin
08/26/2025, 9:34 PM
A few things to check:
- The file must be the complete base job template, with both top-level keys: `job_configuration` and `variables`. If you pass only a snippet (or malformed JSON), Prefect will fall back to the default template.
- Hardcoded objects must not be quoted. Replace `"{{ network_configuration }}"` with an actual JSON object (no surrounding quotes).
- The ECS container name is controlled by `container_name` and the container definition’s `name`, not the top-level `name` property. If you only set `"name": "foo"`, you won’t see the container name change in ECS.
Try this minimal working example. Start from the default template and change only the shown fields; keep the rest as-is. This is valid JSON (no comments), and shows:
- container name set to foo
- network configuration hardcoded with two subnets
```
{
"job_configuration": {
"command": "{{ command }}",
"env": "{{ env }}",
"labels": "{{ labels }}",
"name": "foo",
"aws_credentials": "{{ aws_credentials }}",
"task_definition": {
"containerDefinitions": [
{
"image": "{{ image }}",
"name": "foo"
}
],
"cpu": "{{ cpu }}",
"family": "{{ family }}",
"memory": "{{ memory }}",
"executionRoleArn": "{{ execution_role_arn }}"
},
"task_run_request": {
"launchType": "{{ launch_type }}",
"cluster": "{{ cluster }}",
"overrides": {
"containerOverrides": [
{
"name": "foo",
"command": "{{ command }}",
"environment": "{{ env }}",
"cpu": "{{ cpu }}",
"memory": "{{ memory }}"
}
],
"cpu": "{{ cpu }}",
"memory": "{{ memory }}",
"taskRoleArn": "{{ task_role_arn }}"
},
"tags": "{{ labels }}",
"taskDefinition": "{{ task_definition_arn }}",
"capacityProviderStrategy": "{{ capacity_provider_strategy }}"
},
"configure_cloudwatch_logs": "{{ configure_cloudwatch_logs }}",
"cloudwatch_logs_options": "{{ cloudwatch_logs_options }}",
"cloudwatch_logs_prefix": "{{ cloudwatch_logs_prefix }}",
"network_configuration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-0123456789abcdef0",
"subnet-0fedcba9876543210"
]
}
},
"stream_output": "{{ stream_output }}",
"task_start_timeout_seconds": "{{ task_start_timeout_seconds }}",
"task_watch_poll_interval": "{{ task_watch_poll_interval }}",
"auto_deregister_task_definition": "{{ auto_deregister_task_definition }}",
"vpc_id": "{{ vpc_id }}",
"container_name": "foo",
"cluster": "{{ cluster }}",
"match_latest_revision_in_family": "{{ match_latest_revision_in_family }}"
},
"variables": {
"description": "Variables for templating an ECS job.",
"type": "object",
"properties": {
"name": { "type": "string" },
"container_name": { "type": "string" },
"image": { "type": "string" },
"cpu": { "type": ["integer","string"] },
"memory": { "type": ["integer","string"] },
"cluster": { "type": "string" },
"task_role_arn": { "type": "string" },
"execution_role_arn": { "type": "string" },
"task_definition_arn": { "type": "string" },
"family": { "type": "string" },
"labels": { "type": ["array","object","string","null"] },
"env": { "type": ["array","object","null"] },
"command": { "type": ["array","null"] },
"launch_type": { "type": ["string","null"] },
"capacity_provider_strategy": { "type": ["array","null"] },
"network_configuration": { "type": ["object","null"] },
"configure_cloudwatch_logs": { "type": ["boolean","null"] },
"cloudwatch_logs_options": { "type": ["object","null"] },
"cloudwatch_logs_prefix": { "type": ["string","null"] },
      "stream_output": { "type": ["boolean","null"] },
      "aws_credentials": { "type": ["object","null"] },
      "task_start_timeout_seconds": { "type": ["integer","null"] },
      "task_watch_poll_interval": { "type": ["number","null"] },
      "auto_deregister_task_definition": { "type": ["boolean","null"] },
      "vpc_id": { "type": ["string","null"] },
      "match_latest_revision_in_family": { "type": ["boolean","null"] }
    }
  }
}
```
Marvin
08/26/2025, 9:34 PM
Apply and verify:
# Update the pool with your edited file
prefect work-pool update <your-pool> --base-job-template ecs-base-job-template.json
# Inspect what Prefect actually stored (this is key)
prefect work-pool inspect <your-pool>
If `prefect work-pool inspect` still shows defaults, likely causes:
- The JSON file wasn’t the full base template (missing the `job_configuration` and `variables` top-level keys).
- The JSON wasn’t valid (e.g., stray commas or quotes around objects), so the update was rejected and defaults remained.
- You updated a different pool than the one your worker is polling.
- Your deployment has job_variables set that override the pool defaults (deployment overrides take precedence). Check your deployment definition for keys like `cluster`, `network_configuration`, etc.
If you can paste the exact JSON you used (or the `prefect work-pool inspect <pool>` output) and the pool name you’re starting the worker with, I’ll pinpoint the issue quickly.
David Martin Calalang
Reference: ECS worker guide