@Marvin I'm passing variables to my ECS task ...
# ask-community
@Marvin I'm passing variables to my ECS task definition like cpu, memory, executionRoleArn, containerDefinitions via Terraform in the job_variables. The flow run logs show that my task definition is set up correctly, but all of those variables are also getting passed as parameters to the flow function itself. This causes the following error:
```
prefect.exceptions.SignatureMismatchError: Function expects parameters [] but was provided with parameters ['cpu', 'env', 'image', 'memory', 'vpc_id', 'cluster', 'execution_role_arn', 'network_configuration', 'cloudwatch_logs_options', 'configure_cloudwatch_logs']
```
Any suggestions?
thought for 429.5 seconds
It sounds like the ECS job variables are ending up in your deployment's flow parameters. That's what triggers the SignatureMismatchError (your flow takes no params, but the run is being created with keys like cpu, image, memory, etc.).

Quick checks
- Inspect the deployment to confirm what Prefect thinks the parameters are:
```bash
prefect deployment inspect "<flow_name>/<deployment_name>"
```
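If you prefer to check programmatically, here is a minimal sketch (assuming Prefect 3.x; "my-flow/my-deployment" is a placeholder, swap in your own names) that reads the deployment via the API and prints what will be passed to the flow versus to the ECS worker:

```python
# Minimal sketch, assuming Prefect 3.x; "my-flow/my-deployment" is a placeholder name.
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        deployment = await client.read_deployment_by_name("my-flow/my-deployment")
        # parameters should be {} for a flow with no inputs;
        # ECS settings (cpu, memory, etc.) should only appear under job_variables.
        print("parameters:   ", deployment.parameters)
        print("job_variables:", deployment.job_variables)


asyncio.run(main())
```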
If you see those ECS keys under "parameters", that's the cause.

Likely cause
- In Prefect 3.x, ECS-specific settings belong under the work pool's job_variables, while flow parameters are a separate field. If Terraform puts your ECS job variables into the deployment parameters (or you accidentally set them there), Prefect will pass them into the flow function.

How to fix in Terraform
- Ensure you are setting job_variables under the work_pool block, and do not set these under parameters.
- If you don't have flow parameters, set parameters to an empty object.

Example (using snake_case keys expected by the ECS worker):
```hcl
resource "prefect_deployment" "my_dep" {
  name      = "my-deployment"
  flow_name = "my-flow"

  # Only actual flow inputs go here. If none, leave empty.
  parameters = jsonencode({})

  work_pool {
    name = "my-ecs-pool"

    # ECS configuration for the worker
    job_variables = jsonencode({
      image                = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest"
      cpu                  = 1024
      memory               = 2048
      cluster              = "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster"
      execution_role_arn   = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
      task_role_arn        = "arn:aws:iam::123456789012:role/myTaskRole"
      vpc_id               = "vpc-0123456789abcdef0"
      network_configuration = {
        subnets          = ["subnet-abc", "subnet-def"]
        security_groups  = ["sg-123456"]
        assign_public_ip = true
      }
      configure_cloudwatch_logs = true
      cloudwatch_logs_options   = {
        logGroup     = "/ecs/prefect"
        streamPrefix = "prefect"
        createLogGroup = true
      }
      env = {
        PREFECT_LOGGING_LEVEL = "INFO"
      }
    })
  }
}
```
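For comparison, the same split looks like this with the Python deploy API. This is a rough sketch, assuming Prefect 3.x and that the flow code is already baked into the image; the pool and image names are the same placeholders as in the Terraform example above, not your exact setup:

```python
# Rough sketch, assuming Prefect 3.x and that the flow code is already in the image.
# Names (my-ecs-pool, the ECR image, the role ARN) are placeholders from the example above.
from prefect import flow


@flow
def my_flow():
    ...


if __name__ == "__main__":
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="my-ecs-pool",
        image="123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest",
        build=False,    # image is built and pushed elsewhere (e.g., by CI)
        push=False,
        parameters={},  # only real flow inputs belong here
        job_variables={  # ECS settings go to the worker, not the flow
            "cpu": 1024,
            "memory": 2048,
            "execution_role_arn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        },
    )
```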
Notes
- Use snake_case keys (e.g., execution_role_arn) for the ECS worker; camelCase like executionRoleArn/containerDefinitions won't match the ECS worker schema.
- The ECS worker builds the container definition for you from fields like image/env/cpu/memory. You typically don't pass raw containerDefinitions.

Temporary workaround
- If you need to unblock immediately, make your flow accept extra kwargs so unexpected parameters are ignored:
```python
from prefect import flow


@flow
def my_flow(**kwargs):
    # Accept and ignore any unexpected parameters as a stopgap
    ...
```
But it's better to fix the deployment so ECS settings live only in job_variables.

Can you share:
- Your Terraform resource for the deployment (sanitized)
- Prefect version, prefect-aws version, and Terraform provider version

With that I can pinpoint whether this is a config issue or a provider bug. If it's the latter, I can open an issue with a minimal repro.

Docs for reference
- ECS worker guide: ECS guide
- Customizing job variables: Customize job variables
- Terraform resource (deployment): Terraform deployment resource