I have a question about workers and backlog runs -
I set up a worker and was testing with my laptop; I configured the deployment to use the worker and run every 5 minutes. When I booted up my machine after the weekend and started the worker, it had hundreds of late/scheduled flow runs queued up, and it almost crashed my machine trying to deploy and run them all.
I'm not actually using my laptop as a server; it's just a test for another machine that will have occasional downtime. But I need the process/memory/CPU footprint on that machine to be extremely low.
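For context, the setup is roughly like this (a minimal sketch; the flow, repo, and pool names are placeholders, and I'm assuming a process-type work pool):

    from prefect import flow

    @flow
    def my_flow():
        # the catch-up logic for missed windows already lives inside the flow
        ...

    if __name__ == "__main__":
        # placeholder source/entrypoint/pool names for illustration
        my_flow.from_source(
            source="https://github.com/my-org/my-repo",
            entrypoint="flows/my_flow.py:my_flow",
        ).deploy(
            name="every-5-min",
            work_pool_name="laptop-pool",
            interval=300,  # schedule a run every 5 minutes
        )

Then on the laptop I just start a worker pointed at that pool, and it picks up the scheduled runs.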
Is there a way for the worker to handle missed scheduled runs by just skipping them? The catch-up logic is already baked into the flow itself. I saw there's a --limit option for workers, but that seems like it would only prevent running too many at once; it wouldn't ignore the late scheduled runs.
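For reference, what I was looking at is something like this (sketch; the pool name and limit value are just examples):

    # cap how many flow runs this worker executes concurrently;
    # it throttles execution, but the late runs still sit in the queue waiting
    prefect worker start --pool "laptop-pool" --limit 1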