# prefect-community
Hey, I set up Prefect on EKS with Fargate on AWS. Everything has been running fine, but now the completed pods are not being deleted and are skyrocketing the AWS costs. I see a bunch of pods when I run `kubectl get pods` (I deleted them manually for now). Any guidance on what I can check to make sure nothing is still running and AWS won't keep racking up the bill?
I think the most frequent cause of this is unclosed connections, like clients to a database or something like that. Does this behavior also persist with a simple flow? You could test with one and see if it still happens.
Not necessarily. Once Kubernetes jobs are completed, the pods are not deleted automatically (e.g. so you can still view their logs). It is up to the user to delete old jobs. Fortunately, there are tools for this, such as setting a Job's `activeDeadlineSeconds` to terminate running jobs that are stuck somehow. And for completed jobs, there is the TTL controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/
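For reference, a minimal sketch of what the TTL controller looks like in a Job spec. The Job name and image tag here are illustrative placeholders; the key part is `ttlSecondsAfterFinished`, which tells Kubernetes to delete the Job and its pods automatically once it finishes:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-flow-run        # placeholder name
spec:
  ttlSecondsAfterFinished: 300  # clean up the Job and its pods 5 minutes after it finishes
  template:
    spec:
      containers:
        - name: flow
          image: prefecthq/prefect:latest  # image tag is an assumption
      restartPolicy: Never
```

With this set, completed pods disappear on their own instead of piling up until you delete them by hand.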
Alternatively, you can schedule a simple CronJob to delete jobs older than a particular timestamp.
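As a sketch of that approach (the names and schedule are illustrative, and the ServiceAccount is assumed to have RBAC permission to list and delete Jobs): a CronJob that runs `kubectl` nightly. To keep the example simple, this variant deletes all successfully completed Jobs rather than filtering by age:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-cleanup                         # illustrative name
spec:
  schedule: "0 3 * * *"                     # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: job-cleaner   # assumed SA with rights to delete Jobs
          containers:
            - name: cleanup
              image: bitnami/kubectl:latest # public image that ships kubectl
              command:
                - /bin/sh
                - -c
                # status.successful is a supported field selector for batch/v1 Jobs
                - kubectl delete jobs --field-selector status.successful=1
          restartPolicy: OnFailure
```

If you need an actual age cutoff rather than "all completed", the TTL controller above is usually the simpler option, since it handles the timing per Job for you.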