# ask-marvin
@Marvin

```yaml
deployments:
- name: dev
  version: '{{ build_image.tag }}'
  tags:
  - '{{ build_image.tag }}'
  description: null
  schedule: {}
  flow_name: null
  entrypoint: extraction/ebay/ebay.py:run
  parameters: {"seller_id":"","selling_partner_id":""}
  work_pool:
    name: my-ecs-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
      # push: false
      env:
        APP_ENV: "dev"
        SOURCE: "ebay"
```

How do I override the RAM and CPU of the ECS worker task using deployments?
thought for 157.7 seconds
You can override ECS CPU and memory per deployment by setting them in the work pool's `job_variables`. In Prefect 3.x's ECS worker, the keys are `cpu` and `memory`:

- `cpu`: integer ECS CPU units (1024 = 1 vCPU)
- `memory`: integer MiB of memory at the task level

Since you're targeting ECS EC2 (not Fargate), you can choose any values that fit your EC2 host. Here's your deployment updated with small POC defaults:
```yaml
deployments:
- name: dev
  version: '{{ build_image.tag }}'
  tags:
  - '{{ build_image.tag }}'
  entrypoint: extraction/ebay/ebay.py:run
  parameters: {"seller_id":"","selling_partner_id":""}
  work_pool:
    name: my-ecs-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
      launch_type: EC2
      cpu: 256          # 0.25 vCPU
      memory: 512       # 512 MiB
      env:
        APP_ENV: "dev"
        SOURCE: "ebay"
```
Notes:
- For EC2, the above sets task-level CPU/memory. Make sure these values don't exceed your single EC2 host's available resources (you mentioned ~8 GB; starting with 256/512 or 512/1024 per task is reasonable for a POC).
- If you need a container-level soft memory limit (`memoryReservation`) rather than task-level memory, you can use JSON Patch task customizations to set it on the first container definition:
```yaml
work_pool:
  name: my-ecs-pool
  work_queue_name: default
  job_variables:
    image: '{{ build_image.image }}'
    launch_type: EC2
    cpu: 256
    memory: 512
    task_customizations:
      - op: add
        path: /containerDefinitions/0/memoryReservation
        value: 512
```
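To make the patch's effect concrete, here is an illustrative sketch of what that single `add` operation does to the generated ECS task definition. The container contents are hypothetical, and a real worker uses a full JSON Patch implementation rather than this minimal hand-rolled application:

```python
# A stand-in for the containerDefinitions list of an ECS task definition.
container_definitions = [
    {"name": "prefect", "image": "my-image:dev"},  # hypothetical container
]

# The task_customizations entry from the YAML above, as a JSON Patch op.
patch_op = {
    "op": "add",
    "path": "/containerDefinitions/0/memoryReservation",
    "value": 512,
}

# Minimal manual application of this one "add" op: split the path,
# index into the first container, and set the new key.
parts = patch_op["path"].split("/")   # ["", "containerDefinitions", "0", "memoryReservation"]
index, key = int(parts[2]), parts[3]
container_definitions[index][key] = patch_op["value"]

print(container_definitions[0]["memoryReservation"])  # 512
```

The result is a 512 MiB soft limit on the first container, while the task-level `memory` remains the hard ceiling.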
This is optional; most users only need `cpu` and `memory`.
- If you later switch to Fargate, be aware that Fargate only accepts specific cpu/memory combinations; with EC2 you're not restricted to those combos.

Helpful docs:
- Customize job variables
- Prefect AWS ECS guide

If you share your target per-task CPU/memory for dev/stage/prod, I can suggest values that will fit your single EC2 host without starving the Prefect server/worker processes.