# ask-marvin
@Marvin what is a common practice to run a flow in ECS? how should I configure prefect.yml?
thought for 658.6 seconds
Great question. In Prefect 3.x the common pattern to run flows on ECS is:
1) Create an ECS work pool
2) Build and publish a Docker image for your flow (usually to ECR)
3) Create a deployment that targets the ECS work pool and sets ECS-specific job variables (image, cluster, CPU/memory, networking, roles, etc.)
4) Start an ECS worker to submit tasks to your cluster

Useful docs:
- ECS guide (Prefect + AWS)
- prefect.yaml schema and steps

You can see all ECS job variables Prefect supports with:
```
prefect work-pool get-default-base-job-template --type ecs
```
CLI snippets (verified)
- Create the work pool:
```
prefect work-pool create "my-ecs-pool" --type ecs
```
- Start a worker that will submit ECS tasks (ensure the environment has AWS permissions and `prefect[aws]` installed):
```
pip install "prefect[aws]"
prefect worker start -p my-ecs-pool
```
A practical prefect.yaml for ECS

Below is a minimal but complete example that:
- Builds and pushes a Docker image to ECR during deploy
- Deploys a flow to an ECS work pool
- Configures cluster, roles, CPU/memory, and networking
```yaml
name: prefect-ecs-example

# Optional build/push steps to create and publish your image (run locally when you do `prefect deploy`)
build:
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        aws ecr get-login-password --region us-east-1 \
        | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  - prefect_docker.deployments.steps.build_docker_image:
      id: build_image
      image_name: "123456789012.dkr.ecr.us-east-1.amazonaws.com/prefect-flows"
      tag: "2025-09-23"
  - prefect_docker.deployments.steps.push_docker_image:
      image_name: "{{ build_image.image_name }}"
      tag: "{{ build_image.tag }}"

# If you do NOT bake code into the image, add a pull step to fetch your code at runtime.
# If you DO bake code into the image, you can omit this 'pull' section.
# pull:
#   - prefect.deployments.steps.git_clone:
#       repository: "https://github.com/your-org/your-repo.git"
#       branch: "main"

deployments:
  - name: my-flow-ecs
    entrypoint: "flows/my_flow.py:my_flow"  # your flow: ./path/to/file.py:flow_func
    work_pool:
      name: "my-ecs-pool"
      # ECS job variables: use the image you built/pushed above (or hard-code a known tag)
      job_variables:
        image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/prefect-flows:2025-09-23"

        # Where to run
        cluster: "arn:aws:ecs:us-east-1:123456789012:cluster/my-ecs-cluster"
        launch_type: "FARGATE"

        # Task sizing
        cpu: "1024"        # 1 vCPU
        memory: "2048"     # 2 GB

        # IAM roles
        execution_role_arn: "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
        task_role_arn: "arn:aws:iam::123456789012:role/prefect-task-role"

        # Networking (awsvpc)
        network_configuration:
          awsvpcConfiguration:
            subnets:
              - "subnet-abc123"
              - "subnet-def456"
            securityGroups:
              - "sg-0123456789abcdef0"
            assignPublicIp: "ENABLED"

        # Logs (optional)
        configure_cloudwatch_logs: true
        cloudwatch_logs_options:
          logGroup: "/ecs/prefect"
          region: "us-east-1"

        # Container name used in overrides (must match a container in the task def)
        container_name: "prefect"

        # Stream logs to the worker process (nice for Dev/Test)
        stream_output: true
```
Notes and best practices
- Image strategy:
  - Common: bake all flow code + dependencies into your Docker image (simple runtime; omit pull steps).
  - Alternative: keep a lightweight image and add a `pull` step (e.g., `git_clone`) to fetch code at runtime.
- AWS permissions:
  - The environment running `prefect worker start` needs AWS permissions to register task definitions and run tasks.
  - The task execution role needs ECR read + CloudWatch Logs permissions.
  - The task role should allow your flow to access the AWS services it uses (S3, RDS, etc.).
- Customize job variables: run `prefect work-pool get-default-base-job-template --type ecs` to see every variable you can set (e.g., capacity provider strategy, VPC ID, CloudWatch settings, `auto_deregister_task_definition`, etc.).
- Deploy your flow: from your project root with this prefect.yaml, run:
```
prefect deploy
```
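One detail worth sanity-checking before deploying: on Fargate, `cpu` and `memory` must be one of AWS's supported pairings (the `cpu: "1024"` / `memory: "2048"` example above is a valid pair). A small stdlib-only helper sketching the classic combinations (newer, larger Fargate sizes exist and are deliberately not modeled here):

```python
# Sketch: validate classic Fargate cpu/memory combinations (CPU units / MiB).
# Larger, newer Fargate sizes (8+ vCPU) are intentionally not covered.
VALID_FARGATE_MEMORY = {
    256: {512, 1024, 2048},                    # .25 vCPU: 0.5, 1, 2 GB
    512: set(range(1024, 4096 + 1, 1024)),     # .5 vCPU: 1–4 GB
    1024: set(range(2048, 8192 + 1, 1024)),    # 1 vCPU: 2–8 GB
    2048: set(range(4096, 16384 + 1, 1024)),   # 2 vCPU: 4–16 GB
    4096: set(range(8192, 30720 + 1, 1024)),   # 4 vCPU: 8–30 GB
}


def is_valid_fargate_size(cpu: str, memory: str) -> bool:
    """Return True if the cpu/memory strings (as used in job_variables) pair up."""
    try:
        cpu_units, memory_mib = int(cpu), int(memory)
    except ValueError:
        return False
    return memory_mib in VALID_FARGATE_MEMORY.get(cpu_units, set())


print(is_valid_fargate_size("1024", "2048"))  # → True (the values used above)
print(is_valid_fargate_size("1024", "1024"))  # → False (too little memory for 1 vCPU)
```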
Would you like to bake your code into the image or pull it at runtime from Git/S3? If you share your cluster/region and whether you use public or private subnets, I can tailor the `job_variables` and steps exactly to your setup.