# ask-marvin
d
@Marvin I'm running Prefect tasks through an AWS ECS hosted server and Prefect worker. How do I configure CloudWatch logs? I want it to create a log stream for each new deployment run, all under a pre-existing log group called `deployment-outputs`. How do I do this through the work pool configuration (CloudWatch Logs Options)?
m
thought for 127.9 seconds
Short answer: set the ECS work pool’s CloudWatch Logs options to point at your existing group and give a stream prefix. ECS will create a fresh log stream per task/run automatically.

In Prefect 3.x (assuming that’s what you’re on), do this in the UI under Work Pools -> your ECS pool -> Edit -> CloudWatch Logs Options:
- Log group: `deployment-outputs`
- Stream prefix: something like `flow-run` or `prefect` (your choice)
- Create log group: off (since your group already exists)
- Region: match your cluster’s region (often auto-resolved, but set it if shown)

What you get under the hood is the ECS container definition’s `logConfiguration`:
```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "deployment-outputs",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "flow-run"
  }
}
```
Notes and gotchas:
- ECS names streams as `{stream-prefix}/{container-name}/{ecs-task-id}`. Each flow run (task) produces a new stream automatically; no extra wiring needed.
- Ensure the task execution role on your ECS tasks has `logs:CreateLogStream` and `logs:PutLogEvents`. If you ever flip “Create log group” on, it also needs `logs:CreateLogGroup`.
- If you prefer editing the base job template instead of the friendly fields, add the `logConfiguration` above to the `containerDefinitions` in the task definition.

Docs for deeper reference:
- Prefect AWS ECS guide
- AWS ECS CloudWatch Logs (awslogs)

If you tell me your region and whether you’re on Fargate or EC2-backed ECS, I can sketch the exact base job template snippet you’d drop in. Otherwise, enjoy your orderly pile of log streams.
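The execution-role permissions called out above can be sketched as a minimal IAM policy document. This is illustrative only: the account ID and `Resource` ARN are placeholders, so scope `Resource` to your own log group, and add `logs:CreateLogGroup` to `Action` only if you turn “Create log group” on.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PrefectEcsCloudWatchLogs",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:deployment-outputs:*"
    }
  ]
}
```

Attach this (or an equivalent statement) to the task execution role referenced by your ECS task definition, not the task role.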
d
@Marvin Show me what JSON I should pass into the CloudWatch Logs Options
m
thought for 32.1 seconds
Paste this into CloudWatch Logs Options:
```json
{
  "awslogs-group": "deployment-outputs",
  "awslogs-region": "us-east-1",
  "awslogs-stream-prefix": "flow-run",
  "awslogs-create-group": "false"
}
```
Notes:
- Replace `us-east-1` with your cluster’s region.
- `awslogs-create-group` is a string; leave it `"false"` since your group already exists. Set it to `"true"` if you want ECS to create the group.
- ECS will create a new stream per task/run automatically: `flow-run/<container-name>/<ecs-task-id>`.
- Make sure your task execution role has `logs:CreateLogStream` and `logs:PutLogEvents`. If you ever set create-group to true, add `logs:CreateLogGroup`.

Reference: Prefect AWS ECS guide
d
@Marvin Does this create a new log stream for each deployment run? And if it does, is there a way to dynamically name each new log stream created? For example, `flow-run` + the flow ID.
m
thought for 25.8 seconds
Short version:
- Yes: with the awslogs driver, ECS creates a new CloudWatch log stream for each ECS task (i.e., each flow run).
- No: you can’t fully control the exact stream name. ECS names streams as `{awslogs-stream-prefix}/{container-name}/{ecs-task-id}`.

You can, however, bake dynamic values (like the flow run ID) into the stream prefix via templating, so the first path segment includes your run info. Use this in CloudWatch Logs Options:
```json
{
  "awslogs-group": "deployment-outputs",
  "awslogs-region": "us-east-1",
  "awslogs-stream-prefix": "flow-run-{{ flow_run.id }}",
  "awslogs-create-group": "false"
}
```
What you’ll see in CloudWatch: `flow-run-<flow_run_id>/<container-name>/<ecs-task-id>`

Notes:
- Replace the region accordingly.
- Templating with `{{ flow_run.id }}` works in the ECS work pool’s job template. In most setups, the CloudWatch Logs Options field also passes templates through; if it doesn’t in your UI build, drop the same values into the base job template’s `containerDefinitions.logConfiguration.options` instead.
- You still can’t eliminate the container-name and task-id segments; that’s how ECS structures streams.
- If you run multiple containers per task, you’ll get multiple streams per run (one per container).

Refs:
- Prefect AWS ECS guide
- AWS: Using the awslogs driver with ECS
d
@Marvin I believe that this logging setting is being overridden by my deployment. How would I add the same logging settings to my prefect.yaml file?
m
thought for 37.2 seconds
Yep—your deployment can override pool defaults. Add the CloudWatch options directly to the deployment’s job variables in prefect.yaml so they’re applied even if the pool is overridden. Example prefect.yaml snippet:
```yaml
deployments:
  - name: my-flow
    entrypoint: flows/my_flow.py:flow
    work_pool:
      name: my-ecs-pool
      job_variables:
        cloudwatch_logs_options:
          awslogs-group: deployment-outputs
          awslogs-region: us-east-1
          awslogs-stream-prefix: "flow-run-{{ flow_run.id }}"
          awslogs-create-group: "false"
```
Notes:
- Replace the region accordingly.
- The Jinja `{{ flow_run.id }}` gives you per-run prefixes; ECS still appends `/<container-name>/<ecs-task-id>` to form the final stream name.
- If your deployment sets a custom `task_definition` (or `task_definition_arn`), Prefect won’t inject log settings; include the `logConfiguration` yourself in that task definition.
- Ensure the task execution role has `logs:CreateLogStream` and `logs:PutLogEvents`.
d
@Marvin My `job_variables` look like this right now before adding cloudwatch, how would I add it in this case...
```yaml
job_variables: {
  image: "{{ build_image.image_name }}:{{ build_image.tag }}",
  # Can change for scaling...
  cpu: 4096,
  memory: 8192
}
```
m
thought for 24.6 seconds
Add a `cloudwatch_logs_options` map to your `job_variables`. Inline style to match what you’ve got:
```yaml
job_variables: {
  image: "{{ build_image.image_name }}:{{ build_image.tag }}",
  cpu: 4096,
  memory: 8192,
  cloudwatch_logs_options: {
    awslogs-group: "deployment-outputs",
    awslogs-region: "us-east-1",
    awslogs-stream-prefix: "flow-run-{{ flow_run.id }}",
    awslogs-create-group: "false"
  }
}
```
Notes:
- Replace `us-east-1` with your region.
- Values are strings (that’s what the awslogs driver expects).
- This yields streams like `flow-run-<flow_run_id>/<container-name>/<ecs-task-id>` per run.
- If your deployment uses a custom `task_definition`/`task_definition_arn`, these options won’t be injected; add a `logConfiguration` to that task definition instead.

Refs: Prefect AWS ECS guide
d
@Marvin I'm getting this error: `Value error, configure_cloudwatch_log must be enabled to use cloudwatch_logs_options`
m
thought for 21.2 seconds
You need to flip on the switch that tells the ECS worker to inject the log config. Add `configure_cloudwatch_logs: true` alongside your `cloudwatch_logs_options`:
```yaml
job_variables: {
  image: "{{ build_image.image_name }}:{{ build_image.tag }}",
  cpu: 4096,
  memory: 8192,
  configure_cloudwatch_logs: true,
  cloudwatch_logs_options: {
    awslogs-group: "deployment-outputs",
    awslogs-region: "us-east-1",
    awslogs-stream-prefix: "flow-run-{{ flow_run.id }}",
    awslogs-create-group: "false"
  }
}
```
Notes:
- The key is `configure_cloudwatch_logs` (plural). That error message is… economical.
- If your deployment supplies a custom `task_definition`/`task_definition_arn`, these options won’t be injected; you must put the `logConfiguration` directly in that task definition.
- Ensure the task execution role has `logs:CreateLogStream` and `logs:PutLogEvents`.

Ref: Prefect AWS ECS guide
d
@Marvin Instead of `{{ flow_run.id }}` I want to use my deployment name
m
thought for 25.3 seconds
Yes—use the deployment name in the stream prefix via Jinja. Example updating your existing job_variables:
```yaml
job_variables: {
  image: "{{ build_image.image_name }}:{{ build_image.tag }}",
  cpu: 4096,
  memory: 8192,
  configure_cloudwatch_logs: true,
  cloudwatch_logs_options: {
    awslogs-group: "deployment-outputs",
    awslogs-region: "us-east-1",
    awslogs-stream-prefix: "flow-{{ deployment.name | replace(' ', '-') | replace('/', '-') }}",
    awslogs-create-group: "false"
  }
}
```
What you’ll get per run: `flow-<deployment-name>/<container-name>/<ecs-task-id>`

Notes:
- Replace the region accordingly.
- The `replace` filters strip spaces and slashes to keep the prefix tidy; optional, but it avoids odd path segments.
- If you’re injecting your own `task_definition` (or `task_definition_arn`), you must put the equivalent `logConfiguration` there instead.

Ref: Prefect AWS ECS guide
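If you prefer block-style YAML over the inline map, the same settings drop into a full prefect.yaml deployment entry like this. This is a sketch consolidating the thread so far; `my-flow`, the entrypoint, and `my-ecs-pool` are placeholders, not values from your project.

```yaml
deployments:
  - name: my-flow
    entrypoint: flows/my_flow.py:flow
    work_pool:
      name: my-ecs-pool
      job_variables:
        image: "{{ build_image.image_name }}:{{ build_image.tag }}"
        cpu: 4096
        memory: 8192
        configure_cloudwatch_logs: true
        cloudwatch_logs_options:
          awslogs-group: "deployment-outputs"
          awslogs-region: "us-east-1"
          awslogs-stream-prefix: "flow-{{ deployment.name | replace(' ', '-') | replace('/', '-') }}"
          awslogs-create-group: "false"
```

Block style and the inline map are equivalent YAML; block style is just easier to extend as you add more job variables.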
d
@Marvin Still not showing up, is it an issue with the task definition that is used in the task run? How would I configure logs through that task definition?
m
thought for 61.3 seconds
Likely cause: your deployment is using a custom task definition (or `task_definition_arn`), so Prefect isn’t injecting the CloudWatch settings. In that case, you must put the log configuration directly into the ECS task definition. Here’s the container definition fragment you need in your task definition:
```json
"containerDefinitions": [
  {
    "name": "prefect",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:tag",
    "essential": true,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "deployment-outputs",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "flow-my-deployment",
        "awslogs-create-group": "false"
      }
    }
  }
]
```
And a fuller Fargate-style task definition example:
```json
{
  "family": "my-prefect-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "4096",
  "memory": "8192",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/prefect-task-role",
  "containerDefinitions": [
    {
      "name": "prefect",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:tag",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "deployment-outputs",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "flow-my-deployment",
          "awslogs-create-group": "false"
        }
      }
    }
  ]
}
```
Key points:
- ECS will create a new log stream per task automatically at `flow-my-deployment/<container-name>/<ecs-task-id>`.
- Dynamic templating (e.g., `{{ deployment.name }}`) will NOT work inside a static task definition ARN. If you need per-deployment or per-run prefixes, don’t use a fixed `task_definition_arn`; let Prefect register the task definition and set `configure_cloudwatch_logs: true` plus `cloudwatch_logs_options.awslogs-stream-prefix: "flow-{{ deployment.name }}"` (or similar).
- If you must keep a fixed ARN, choose a static prefix (e.g., `flow-my-deployment`) and accept ECS’s container/task-id suffix.

Sanity checks if logs still don’t show:
- In the Prefect UI, open the flow run -> Infrastructure tab. If it says it’s using a `task_definition_arn`, Prefect didn’t inject logs.
- In the ECS console, open the running task -> Containers. If there’s no “View logs” link, `logConfiguration` is missing.
- Ensure the task execution role has `logs:CreateLogStream` and `logs:PutLogEvents` (and `logs:CreateLogGroup` only if you use create-group true).
- The region must match your cluster and log group region.
- The container name used by Prefect should match the one you configured logs for (default is often `prefect`; if yours differs, set the worker/deployment `container_name` accordingly).

Docs:
- Prefect AWS ECS guide
- AWS ECS: Using the awslogs log driver