# prefect-community
Hi everyone! I've been having a lot of success migrating from Prefect 1 to Prefect 2 on GKE. It has been a great experience!! I'm not sure this is the correct place for feedback, but there are 2 situations where better visibility would be great:
1. When the underlying Kubernetes pod runs over its resource requirements, the pod gets killed by the GKE cluster, but the flow run in Prefect Cloud stays stuck in a non-terminal state. Nothing is ever messaged back to the logs and the run never transitions; some sort of response or indication that the pod is no longer active would be great!
2. When you run a job and it never makes it to the actual flow, the failure comes through in the Prefect Cloud UI, but the Kubernetes pod remains active. I'm assuming this is due to the relevant cleanup setting not getting applied yet, but it is set at the deployment level, so I'm not sure why the pod is kept around on a failure like this.
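For anyone hitting issue 1, giving the job pod explicit resource requests and limits at least makes the OOM kill predictable; in Prefect 2 this can be attached to the `KubernetesJob` infrastructure block as a JSON-patch customization. The sketch below only builds the patch as plain data; the memory/CPU numbers are illustrative, and the `KubernetesJob(customizations=..., finished_job_ttl=...)` usage in the comments is assumed from the Prefect 2 docs rather than exercised here:

```python
# Sketch: a JSON Patch (RFC 6902) that adds explicit resource requests/limits
# to the first container in Prefect's Kubernetes job template, so GKE
# schedules the pod with known headroom. The numbers are illustrative.
resource_patch = [
    {
        "op": "add",
        "path": "/spec/template/spec/containers/0/resources",
        "value": {
            "requests": {"memory": "2Gi", "cpu": "500m"},
            "limits": {"memory": "4Gi", "cpu": "1"},
        },
    }
]

# With prefect installed, this would be attached roughly like (assumed API,
# not run here):
#   from prefect.infrastructure import KubernetesJob
#   job = KubernetesJob(customizations=resource_patch, finished_job_ttl=300)
# finished_job_ttl asks Kubernetes to delete the finished job (and its pod)
# after N seconds, which is relevant to the lingering-pod reports in this
# thread.
```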
✅ 1
➕ 1
:gratitude-thank-you: 3
Thanks so much! We are aware that this kind of infrastructure management can be painful, and we are working on a feature set that will make it more observable and easier to manage. Thanks again, and great to hear that the migration is going well! 🙌
🙌 3
Amazing to hear, and thank you for responding on the weekend! Looking forward to what is to come! One more I forgot that I'm sure has already been reported:
• When cancelling a flow run in the UI, the k8s pod doesn't get cleaned up (similar to 2, but slightly different)
šŸ‘ 1
gratitude thank you 1