# prefect-community
Also how much memory and CPU will I need to run a prefect agent, given that I’ll be running kubernetes jobs (as separate pods)?
CPU won’t be much, but I have an agent running in my cluster right now that takes about 80 MB.
Cool, have you got an example .yml file I can use without the ENV variables please?
(I’m using GKE auto-pilot)
well… I’m doing some undocumented things: I run the agent and code together as part of a k8s Deployment, so I’m hacking the Prefect deployment.yaml (not the k8s deployment YAML) to use a pod-local directory, and then just spawning the work in a subprocess rather than a k8s job
@James Phoenix the agent process itself is very light, so I don't believe you'll need to worry about its resource usage too much. However, the memory/CPU usage of the flow runs spawned by the agent on your pods will depend on the nature of those flow runs. You can set default memory/CPU requests for pods in a namespace.
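For the last point, here's a minimal sketch of setting namespace-level defaults with a Kubernetes `LimitRange` — the namespace name and the resource values are just examples, so tune them for your workloads:

```shell
# Apply a LimitRange that gives containers in the "prefect" namespace
# default requests/limits whenever a pod spec omits them.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
  namespace: prefect     # example namespace
spec:
  limits:
    - type: Container
      defaultRequest:    # used when a container omits resource requests
        cpu: 250m
        memory: 256Mi
      default:           # used when a container omits resource limits
        cpu: 500m
        memory: 512Mi
EOF
```

Note that on GKE Autopilot the platform also enforces its own minimum requests per pod, so the effective values may be adjusted upward.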
Cool 🙂 @Nate can you please provide a default k8s agent .yml that I can test?
Or specify what is wrong with mine?
Hi James - couple things
1 - would you mind cleaning up your posts and condensing them into a single thread so we aren’t cluttering up the channel for everyone?
2 - you can get / create a generic prefect deployment manifest using
`prefect kubernetes manifest orion`
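For reference, a typical way to use that command — assuming the Prefect 2 CLI is installed and `kubectl` is pointed at your cluster; the output filename is just an example:

```shell
# Write the generated manifest to a file so you can review and edit it
# (e.g. image tag, namespace, resource requests) before applying.
prefect kubernetes manifest orion > orion-manifest.yaml

# Apply the (edited) manifest to the current kubectl context.
kubectl apply -f orion-manifest.yaml
```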
We have some resources to do this from the agent, as well as helm charts:
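As a sketch of the Helm route — repo URL and chart/release names here are assumptions, so check the prefect-helm README for the current chart names and values:

```shell
# Add the PrefectHQ Helm repository and refresh the local index.
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update

# Install an agent chart into its own namespace; "prefect-agent" and
# the namespace are example names — see the chart's values for
# API key / workspace configuration when targeting Prefect Cloud.
helm install prefect-agent prefect/prefect-agent --namespace prefect --create-namespace
```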
I’ve cleaned up my threads. Thanks @Christopher Boyd
Really good videos, thanks @Christopher Boyd. Two questions please:
• Do I still need to have an orion API container if I’m pointing to Prefect Cloud?
• How can I create a queue for my k8s agent? Or do I simply need to create a queue and then do the same in ?
Yes, although it's lightweight; it's executing and returning on behalf of the agent, even against the cloud.
For the second, you only need to do it in one place.
So if you are authenticated via the CLI, you can do `prefect work-queue create`.
It will be pushed to Cloud, and you can configure your agent to listen on that work queue afterwards.
Alternatively, you can specify the queue in the command used to start the agent itself.
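Putting both steps together — the queue name is an example, and flag spellings can vary between Prefect 2 releases, so check `prefect agent start --help` on your version:

```shell
# Create a work queue in Prefect Cloud (requires `prefect cloud login` first);
# "k8s-queue" is an example name.
prefect work-queue create k8s-queue

# Start an agent that polls that queue for scheduled flow runs.
prefect agent start -q k8s-queue
```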