# ask-marvin
d
@Marvin I'm creating a work pool through the CLI with the following command:
```
pip install prefect-aws && prefect worker start --pool scalable-data-pipeline --type ecs
```
What additional commands/flags should I add if I wanted to include details like a task role, cluster, VPC and so on? Right now I am manually adding those details in the Prefect UI.
m
thought for 366.7 seconds
Great question. In Prefect 3.x, ECS-specific settings (cluster, task role, subnets/security groups, etc.) are not passed to `prefect worker start`. They belong in the work pool's base job template or in per-deployment job variables; the worker simply polls the pool and uses those settings. Also note: in Prefect 3.x you do not pass `--type` to `prefect worker start`. The worker type comes from the work pool, so use `--type ecs` only when creating the pool.

Here's the recommended flow:

1) Install the AWS integration
```
pip install prefect-aws
```
2) Create or update the ECS work pool with a base job template. First, fetch the default ECS base job template and write it to a file:
```
prefect work-pool get-default-base-job-template --type ecs --file ecs-base-job-template.json
```
- Open `ecs-base-job-template.json` and fill in ECS details. Look for fields like:
  - cluster (name or ARN)
  - taskRoleArn
  - executionRoleArn
  - cpu, memory, family
  - launchType or capacityProviderStrategy
  - networkConfiguration.awsvpcConfiguration.subnets
  - networkConfiguration.awsvpcConfiguration.securityGroups
  - networkConfiguration.awsvpcConfiguration.assignPublicIp
  - platformVersion
  - container overrides (environment, command, etc.)

  Tip: you'll see Jinja-style variables (e.g., `{{ cluster }}`) in the template. You can either hardcode values here for the pool, or keep the variables and supply values per-deployment via job variables.
- Create the pool (or update an existing one) with that template:
```
# Create
prefect work-pool create scalable-data-pipeline --type ecs --base-job-template ecs-base-job-template.json

# Or update if the pool already exists
prefect work-pool update scalable-data-pipeline --base-job-template ecs-base-job-template.json
```
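To make the templating model concrete: placeholders like `{{ cluster }}` get filled from work pool defaults and per-deployment job variables, and a placeholder that is the *entire* string is replaced by the raw value, so objects stay objects. Below is a simplified, stdlib-only sketch of that behavior; Prefect's real renderer is more involved, and the fragment and values here are made-up examples:

```python
import json
import re

# A fragment of a base job template with Jinja-style placeholders,
# as seen in ecs-base-job-template.json (hypothetical values).
template_fragment = {
    "cluster": "{{ cluster }}",
    "network_configuration": "{{ network_configuration }}",
}

# Values that would come from job variables on a deployment.
job_variables = {
    "cluster": "my-ecs-cluster",
    "network_configuration": {
        "awsvpcConfiguration": {"subnets": ["subnet-abc", "subnet-def"]}
    },
}

def render(value, variables):
    """Recursively replace '{{ name }}' placeholders with variable values.

    A string that is exactly one placeholder is replaced by the raw value,
    so objects stay objects instead of becoming quoted strings.
    """
    if isinstance(value, str):
        match = re.fullmatch(r"\{\{\s*(\w+)\s*\}\}", value)
        if match and match.group(1) in variables:
            return variables[match.group(1)]
        return value
    if isinstance(value, dict):
        return {k: render(v, variables) for k, v in value.items()}
    if isinstance(value, list):
        return [render(v, variables) for v in value]
    return value

rendered = render(template_fragment, job_variables)
print(json.dumps(rendered, indent=2))
```

This is why hardcoding a value in the pool template and supplying it per-deployment are interchangeable: both end up substituted into the same job configuration.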
3) Start the worker for the pool
```
prefect worker start --pool scalable-data-pipeline
```
No `--type` needed here in 3.x.

4) (Optional) Override values per-deployment via job variables. If you kept variables like `{{ cluster }}` in the pool template, set them on deployments so different deployments can target different clusters, roles, or subnets. Example in Python:
```python
from prefect import flow

@flow
def my_flow():
    ...

my_flow.from_source(
    "https://github.com/your/repo",
    entrypoint="path/to/flow.py:my_flow",  # placeholder: path to the flow within the repo
).deploy(
    name="ecs-deploy",
    work_pool_name="scalable-data-pipeline",
    job_variables={
        "cluster": "my-ecs-cluster",
        "task_role_arn": "arn:aws:iam::123456789012:role/my-task-role",
        "execution_role_arn": "arn:aws:iam::123456789012:role/my-execution-role",
        "subnets": ["subnet-abc", "subnet-def"],
        "security_groups": ["sg-123"],
        "assign_public_ip": "ENABLED",
        "cpu": 1024,
        "memory": 2048,
    },
)
```
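A way to catch typos in `job_variables` keys before deploying is to compare them against the `variables` schema in the exported base job template. A minimal stdlib sketch, assuming the template layout described in this thread; the schema fragment and values below are illustrative stand-ins, not your real pool's:

```python
# Stand-in for json.load(open("ecs-base-job-template.json")) -- in practice,
# load the file you exported with `prefect work-pool get-default-base-job-template`.
template = {
    "job_configuration": {"cluster": "{{ cluster }}"},
    "variables": {
        "type": "object",
        "properties": {
            "cluster": {"type": "string"},
            "task_role_arn": {"type": "string"},
            "subnets": {"type": "array"},
        },
    },
}

# Job variables you plan to set on a deployment.
job_variables = {
    "cluster": "my-ecs-cluster",
    "task_role_arn": "arn:aws:iam::123456789012:role/my-task-role",
    "subnet": ["subnet-abc"],  # deliberate typo: the schema key is "subnets"
}

# Any key not present in the template's variables schema will be silently
# ignored at runtime, so flag it here instead.
known = set(template["variables"]["properties"])
unknown = sorted(set(job_variables) - known)
if unknown:
    print(f"job_variables not in the template schema: {unknown}")
```

This kind of check would have surfaced the mismatch discussed later in the thread, where a pool update silently fell back to defaults.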
Match the keys to what you see in the base job template you exported.

5) (Optional) Use Prefect tags to route work. If you want certain workers to only pick up specific deployments, start the worker with matching tags and add the same tags to the deployment:
```
prefect worker start --pool scalable-data-pipeline --matching-tags ecs-prod
```
Then add `tags=["ecs-prod"]` to your deployment configuration.

Docs for reference:
- ECS worker guide: Prefect 3 ECS guide

If you share a snippet of your exported `ecs-base-job-template.json`, I can point you to the exact keys to set for your cluster, roles, and networking.
d
@Marvin Does Prefect automatically create a work-pool if I run `prefect worker start --pool foo --type ecs` and the work-pool foo doesn't exist?
m
thought for 109.3 seconds
Short answer: No. In Prefect 3.x, `prefect worker start` will not create a work pool for you, and it does not accept `--type`. If the pool doesn't exist, you'll get an error indicating the work pool was not found. Use this flow instead:
```
# 1) Install the AWS integration
pip install prefect-aws

# 2) Create the ECS work pool (once)
prefect work-pool create foo --type ecs

# Optionally set a base job template:
prefect work-pool get-default-base-job-template --type ecs --file ecs-base-job-template.json
# Edit ecs-base-job-template.json with your cluster/roles/networking, then:
prefect work-pool update foo --base-job-template ecs-base-job-template.json

# 3) Start the worker for that pool
prefect worker start --pool foo
```
Docs:
- Workers and work pools: Workers and work pools (Prefect 3)
- ECS worker guide: ECS worker guide

If you're actually on Prefect 2.x, behavior and flags differ. If that's the case, let me know your version and I'll tailor the commands.
d
@Marvin Here's a snippet from the template .json:
```
"command": "{{ command }}",
"env": "{{ env }}",
"labels": "{{ labels }}",
"name": "{{ name }}",
"aws_credentials": "{{ aws_credentials }}",
"network_configuration": "{{ network_configuration }}"
```
If I wanted to change the exact value of "name" to foo, and I wanted to add two subnet IDs to the network_configuration, how would that look hardcoded in?
m
thought for 69.2 seconds
You'll replace the template variables with literal values in the base job template JSON. For your two changes:
- Set the container/task name to a fixed string "foo"
- Hardcode two subnets in the awsvpc configuration

Example snippet with just those two hardcoded and the rest left as variables:
```
"command": "{{ command }}",
"env": "{{ env }}",
"labels": "{{ labels }}",
"name": "foo",
"aws_credentials": "{{ aws_credentials }}",
"network_configuration": {
  "awsvpcConfiguration": {
    "subnets": [
      "subnet-0123456789abcdef0",
      "subnet-0fedcba9876543210"
    ]
  }
}
```
If you also want to hardcode security groups and public IP assignment, include them too:
```
"network_configuration": {
  "awsvpcConfiguration": {
    "subnets": [
      "subnet-0123456789abcdef0",
      "subnet-0fedcba9876543210"
    ],
    "securityGroups": ["sg-0123456789abcdef0"],
    "assignPublicIp": "ENABLED"
  }
}
```
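When hand-editing the template, a quick parse check catches malformed JSON and the classic mistake of leaving the hardcoded object quoted. A minimal sketch (the fragment below mirrors the snippets above; in practice you would parse the whole template file):

```python
import json

# The edited fragment, as it would appear inside the template file.
snippet = '''
{
  "name": "foo",
  "network_configuration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
    }
  }
}
'''

# Raises json.JSONDecodeError if the edit broke the JSON (stray commas, etc.).
config = json.loads(snippet)

# A common mistake is keeping the surrounding quotes, which yields a string
# instead of an object; verify it parsed as a real dict.
assert isinstance(config["network_configuration"], dict), \
    "network_configuration is still a quoted string"
print(config["network_configuration"]["awsvpcConfiguration"]["subnets"])
```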
Notes:
- Keep the key names exactly as shown for the ECS API object: `awsvpcConfiguration`, `subnets`, `securityGroups`, `assignPublicIp`.
- After editing, apply the template with:
```
prefect work-pool update <your-pool> --base-job-template ecs-base-job-template.json
```
Docs for reference: ECS worker guide
d
@Marvin This doesn't seem to work. I tried creating a new work-pool with the base template, and I also tried creating a work-pool with the default template and then updating it with the new .json file. Both result in a new work-pool, but with none of the custom configuration I wanted; everything is just default.
m
thought for 107.1 seconds
Thanks for the detail. A couple of gotchas commonly cause this:
- The base job template must be the full JSON with top-level keys `job_configuration` and `variables`. If you pass only a snippet (or malformed JSON), Prefect will fall back to the default template.
- Hardcoded objects must not be quoted. Replace `"{{ network_configuration }}"` with an actual JSON object (no surrounding quotes).
- The ECS container name is controlled by `container_name` and the container definition's `name`, not the top-level `name` property. If you only set `"name": "foo"`, you won't see the container name change in ECS.

Try this minimal working example. Start from the default template and change only the shown fields; keep the rest as-is. This is valid JSON (no comments), and shows:
- container name set to foo
- network configuration hardcoded with two subnets
```
{
  "job_configuration": {
    "command": "{{ command }}",
    "env": "{{ env }}",
    "labels": "{{ labels }}",
    "name": "foo",
    "aws_credentials": "{{ aws_credentials }}",
    "task_definition": {
      "containerDefinitions": [
        { "image": "{{ image }}", "name": "foo" }
      ],
      "cpu": "{{ cpu }}",
      "family": "{{ family }}",
      "memory": "{{ memory }}",
      "executionRoleArn": "{{ execution_role_arn }}"
    },
    "task_run_request": {
      "launchType": "{{ launch_type }}",
      "cluster": "{{ cluster }}",
      "overrides": {
        "containerOverrides": [
          {
            "name": "foo",
            "command": "{{ command }}",
            "environment": "{{ env }}",
            "cpu": "{{ cpu }}",
            "memory": "{{ memory }}"
          }
        ],
        "cpu": "{{ cpu }}",
        "memory": "{{ memory }}",
        "taskRoleArn": "{{ task_role_arn }}"
      },
      "tags": "{{ labels }}",
      "taskDefinition": "{{ task_definition_arn }}",
      "capacityProviderStrategy": "{{ capacity_provider_strategy }}"
    },
    "configure_cloudwatch_logs": "{{ configure_cloudwatch_logs }}",
    "cloudwatch_logs_options": "{{ cloudwatch_logs_options }}",
    "cloudwatch_logs_prefix": "{{ cloudwatch_logs_prefix }}",
    "network_configuration": {
      "awsvpcConfiguration": {
        "subnets": [
          "subnet-0123456789abcdef0",
          "subnet-0fedcba9876543210"
        ]
      }
    },
    "stream_output": "{{ stream_output }}",
    "task_start_timeout_seconds": "{{ task_start_timeout_seconds }}",
    "task_watch_poll_interval": "{{ task_watch_poll_interval }}",
    "auto_deregister_task_definition": "{{ auto_deregister_task_definition }}",
    "vpc_id": "{{ vpc_id }}",
    "container_name": "foo",
    "cluster": "{{ cluster }}",
    "match_latest_revision_in_family": "{{ match_latest_revision_in_family }}"
  },
  "variables": {
    "description": "Variables for templating an ECS job.",
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "container_name": { "type": "string" },
      "image": { "type": "string" },
      "cpu": { "type": ["integer", "string"] },
      "memory": { "type": ["integer", "string"] },
      "cluster": { "type": "string" },
      "task_role_arn": { "type": "string" },
      "execution_role_arn": { "type": "string" },
      "task_definition_arn": { "type": "string" },
      "family": { "type": "string" },
      "labels": { "type": ["array", "object", "string", "null"] },
      "env": { "type": ["array", "object", "null"] },
      "command": { "type": ["array", "null"] },
      "launch_type": { "type": ["string", "null"] },
      "capacity_provider_strategy": { "type": ["array", "null"] },
      "network_configuration": { "type": ["object", "null"] },
      "configure_cloudwatch_logs": { "type": ["boolean", "null"] },
      "cloudwatch_logs_options": { "type": ["object", "null"] },
      "cloudwatch_logs_prefix": { "type": ["string", "null"] },
      "stream_output": { "type": ["boolean", "null"] },
      "task_start_timeout_seconds": { "type": ["integer", "null"] },
      "task_watch_poll_interval": { "type": ["number", "null"] },
      "auto_deregister_task_definition": { "type": ["boolean", "null"] },
      "vpc_id": { "type": ["string", "null"] },
      "aws_credentials": { "type": ["object", "null"] },
      "match_latest_revision_in_family": { "type": ["boolean", "null"] }
    }
  }
}
```
Apply and verify:
```
# Update the pool with your edited file
prefect work-pool update <your-pool> --base-job-template ecs-base-job-template.json

# Inspect what Prefect actually stored (this is key)
prefect work-pool inspect <your-pool>
```
If `inspect` still shows defaults, likely causes:
- The JSON file wasn't the full base template (missing the `job_configuration` and `variables` top-level keys).
- The JSON wasn't valid (e.g., stray commas or quotes around objects), so the update was rejected and defaults remained.
- You updated a different pool than the one your worker is polling.
- Your deployment has job_variables set that override the pool defaults (deployment overrides take precedence). Check your deployment definition for keys like `cluster`, `network_configuration`, etc.

If you can paste the exact JSON you used (or the `prefect work-pool inspect <pool>` output) and the pool name you're starting the worker with, I'll pinpoint the issue quickly.

Reference: ECS worker guide
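The gotchas listed in this thread can be checked mechanically before running `prefect work-pool update`. A hedged, stdlib-only sketch of such a pre-flight lint; these checks mirror the pitfalls discussed above and are not an official Prefect validator:

```python
import json

def lint_base_job_template(text):
    """Return a list of problems with a base job template, per the gotchas above."""
    problems = []
    try:
        template = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    # Gotcha 1: the file must be the full template, not a snippet.
    for key in ("job_configuration", "variables"):
        if key not in template:
            problems.append(f"missing top-level key: {key}")
    # Gotcha 2: hardcoded objects must not be quoted. A string value that
    # starts with '{' but is not a '{{ ... }}' placeholder is suspicious.
    job_config = template.get("job_configuration", {})
    for key, value in job_config.items():
        if isinstance(value, str) and value.strip().startswith("{") and "{{" not in value:
            problems.append(f"{key!r} looks like a quoted object; remove the surrounding quotes")
    return problems

# Example: a template missing 'variables' and with a quoted object value.
bad = json.dumps({
    "job_configuration": {
        "network_configuration": '{"awsvpcConfiguration": {"subnets": []}}'
    }
})
print(lint_base_job_template(bad))
```

Running this on the JSON from the earlier messages would have flagged both failure modes (snippet-only file, quoted `network_configuration`) before the silent fallback to defaults.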