# ask-community
d
Possibly related to this and this: I have been working with Prefect 3.0 rc9-10 for about 2 weeks now, converting a pipeline I previously wrote in 2.0 to 3.0. I'm running the server locally as well as on an EC2 instance via Docker, and running a few workers via the ECS work pool configuration. Prefect 3.0rc10's server crashes and restarts the Docker container both in the VM and on my local machine. I observed active deployment flows report that the Prefect API URL was not reachable, retrying 1/3, then 2/3, then finally able to connect after the Docker container had restarted. This happens several times throughout a flow until eventually the flow can no longer connect and the entire pipeline fails.

Second to the server issue, I am experiencing the same thing with ECS workers. There appears to be either a memory leak on the workers, or the Prefect service within the workers is crashing as well, similar to the server. Memory reached about 70-80% on the ECS container before the Prefect service died with exit code -9, causing the entire flow to crash. However, it is unclear to me why memory is growing and not being properly garbage collected. Between the server intermittently dying and restarting from Docker, and the Prefect worker terminating after running 1-2 hours at high memory, I am not sure if this is a configuration issue on my end or an issue specific to Prefect 3.
I'm specifically calling out Prefect 3.0 in this post, as the GitHub issue primarily focuses on Prefect 2.0. I am migrating this pipeline from 2.0 to 3.0 and having the opposite experience: I didn't have memory issues, or at least I didn't notice them being this bad, until I started working with Prefect 3.0. And I certainly didn't have the Prefect server and workers crash intermittently while flows were mid-run.
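For reference on the exit code -9 mentioned above: on POSIX systems, a negative return code from a child process means the process was killed by that signal number, so -9 corresponds to SIGKILL, which is what the Linux OOM killer sends when a container exceeds its memory limit. A minimal sketch demonstrating that mapping (this just illustrates the signal convention, not anything Prefect-specific):

```python
import signal
import subprocess
import sys

# Spawn a child that immediately kills itself with SIGKILL,
# mimicking what the OOM killer does to an over-limit process.
proc = subprocess.run(
    [sys.executable, "-c", "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)

# On POSIX, subprocess reports death-by-signal as a negative returncode.
print(proc.returncode)                          # -9 on POSIX
print(signal.Signals(-proc.returncode).name)    # SIGKILL
```

If the ECS task shows a container exit with -9 (or 137, which is 128 + 9 in Docker's convention), that points at an external kill for memory pressure rather than a Python-level crash, which is consistent with the memory-growth symptom described above.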
a
Thanks for raising this @datamongus, and for kicking the tires on the release candidate. I'll make sure this gets seen by our team. If you want something you can subscribe to for updates, I recommend opening an issue. Either way, super grateful you took the time to write this up. 🙇
d
Sure thing, I'll write this up and open a ticket. I'm still investigating a few things on my end, so I'll try to provide more technical details in the ticket as well.
a
Thanks! Yeah, you nailed it: the more reproducible the path our team can emulate, the faster we can get to the bottom of things. 🤞
y
👀