Yaron Levi (07/14/2025, 1:20 PM):
Reached configured timeout of 300s for ECS
This happened to many flows (all those that were scheduled to run around that time).
Jake Kaplan (07/14/2025, 1:40 PM):
task_start_timeout_seconds on your ECS work pool. That controls when to crash the flow run if the ECS task can't start within that duration.
If you take a look at that task directly in ECS, you should hopefully see more information about why it couldn't provision.
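As a minimal sketch of adjusting that timeout: the 300s default matches the error above, and the value lives in the work pool's base job template. The pool name below is a placeholder, and the exact CLI flags assume a recent Prefect version, so treat this as a starting point rather than a definitive recipe.

```shell
# Inspect the work pool to see its current base job template,
# including task_start_timeout_seconds (defaults to 300).
prefect work-pool inspect "my-ecs-pool"

# Dump the default ECS job template, edit task_start_timeout_seconds
# (e.g. raise it to 600), then push the edited template back.
prefect work-pool get-default-base-job-template --type ecs > ecs-template.json
prefect work-pool update "my-ecs-pool" --base-job-template ecs-template.json
```

Raising the timeout only buys more waiting time; it doesn't fix whatever is keeping the ECS task from provisioning.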
Kevin Grismore (07/14/2025, 2:21 PM):
The one I've seen most commonly is that if you're using Fargate, it's possible the availability zone you're running your tasks in has been hit by high compute demand, and you're being forced to wait longer than usual to get resources allocated.

Cormac (07/14/2025, 2:23 PM):
Fair. Could be that. However, the last instance of the issue occurred over a weekend, which may suggest it's not that (assuming a lighter load over the weekend). If it does happen again, any wise words on what AWS logging to monitor for hints?
Kevin Grismore (07/14/2025, 2:26 PM):
STARTED state. You should be able to find the ECS task your flow run was intended to happen in, and check out its state and logs. Sometimes you'll see logs about failed network requests or images that won't pull (ECS doesn't cache images).
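The inspection step above can be scripted. Below is a minimal boto3 sketch, assuming AWS credentials are configured and using a placeholder cluster name; the parsing helper is pure, so it works on any describe_tasks-style record without AWS access. Fields like stopCode, stoppedReason, and per-container reason are where ECS reports problems such as failed image pulls.

```python
# Sketch: summarize why recently stopped ECS tasks stopped.
# Assumptions: boto3 is installed and credentialed; "my-cluster" is a
# placeholder for your actual ECS cluster name.

def summarize_stopped_task(task: dict) -> str:
    """Build a one-line summary from an ECS describe_tasks task record."""
    code = task.get("stopCode", "unknown")
    reason = task.get("stoppedReason", "unknown")
    containers = [
        f"{c.get('name')}: {c.get('reason', 'no reason')}"
        for c in task.get("containers", [])
    ]
    return f"stopCode={code} stoppedReason={reason} containers={containers}"


def print_stopped_reasons(cluster: str = "my-cluster") -> None:
    """List recently stopped tasks in a cluster and print why each stopped."""
    import boto3  # deferred so the helper above stays importable without AWS

    ecs = boto3.client("ecs")
    arns = ecs.list_tasks(cluster=cluster, desiredStatus="STOPPED")["taskArns"]
    if not arns:
        print("no recently stopped tasks found")
        return
    for task in ecs.describe_tasks(cluster=cluster, tasks=arns)["tasks"]:
        print(summarize_stopped_task(task))
```

For a flow run that never reached STARTED, a stoppedReason such as "CannotPullContainerError" points at the image-pull failures mentioned above, while Fargate capacity problems tend to surface as tasks stuck in PROVISIONING rather than stopped tasks.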