Sergio Luceno
10/15/2025, 6:40 PM

from prefect import flow
import time

@flow
def hello():
    time.sleep(300)
    print("hello world")
I am packaging my flow into a python:3.11-slim image with just this code, pip installing only the prefect dependency, trying to keep things as minimal as possible.
We have a deployment configured to use our Docker image.
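A minimal sketch of what such a deployment registration might look like, assuming Prefect 3.x and a Kubernetes work pool; the work pool name, deployment name, and image tag below are placeholders, not taken from the post:

```python
from prefect import flow
import time


@flow
def hello():
    time.sleep(300)
    print("hello world")


if __name__ == "__main__":
    # Register a deployment that points at a pre-built image.
    # "k8s-pool" and the image tag are hypothetical names.
    hello.deploy(
        name="hello-deploy",
        work_pool_name="k8s-pool",
        image="myregistry/hello:latest",  # built from python:3.11-slim
        build=False,  # image is built and pushed outside of Prefect
        push=False,
    )
```

Running this script registers the deployment with the Prefect server; each triggered run is then submitted by the worker as a new Kubernetes pod, which is the behavior described below.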
---
When executing this in our cluster, every time we run a deployment we get a new pod executing the flow.
Every pod takes 150-200 MB of RAM (not counting the prefect-server and prefect-worker pods).
If I need to run, for example, 10K concurrent jobs, I will have 10K pods x 200 MB RAM each => 2,000,000 MB, i.e. about 2 TB. It's something we can not afford.
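As a quick sanity check on the arithmetic above, using the figures quoted in the post:

```python
# Back-of-the-envelope check of the memory math above.
pods = 10_000        # 10K concurrent jobs, one pod each
mb_per_pod = 200     # upper end of the observed 150-200 MB footprint

total_mb = pods * mb_per_pod
print(f"{total_mb:,} MB = {total_mb / 1_000_000} TB")  # 2,000,000 MB = 2.0 TB
```

So the estimate of roughly 2 TB of RAM just for idle flow-run pods checks out.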
Are we doing something wrong? Can we run our planned workload in another way that uses fewer resources? We are starting to think Prefect is not for our use case:
We just want to run a bunch of small jobs and benefit from Prefect's concurrency management, but we can not afford every single task having a starting memory footprint of 150-200 MB.

Sergio Luceno
10/15/2025, 7:00 PM

Kevin Grismore
10/16/2025, 1:32 AM

Sergio Luceno
11/06/2025, 11:41 AM