# ask-marvin
k
@Marvin with a kubernetes worker, how will the worker choose which node pool to submit a pod to? Point me to source code in your response if possible.
m
It seems there are no direct hits specifically addressing how a Kubernetes worker chooses the node pool to submit a pod to in the Prefect documentation or GitHub issues. However, we can glean some useful information from the related documentation on Kubernetes workers and work pools.

### Key Points:
1. **Declarative Infrastructure Syntax**: Kubernetes workers use infrastructure as code (IaC) patterns to create the execution environment, often employing template variables to customize the Kubernetes manifest.
2. **Work Pools**: Configurations in the Prefect UI or via CLI commands can list and manage work pools, which are essential for controlling the environment where workers operate.

You might need to inspect your Kubernetes worker's specific configuration for node selection logic. Look into whether you're using Prefect manifests that define node selectors or affinity rules.

### Documentation Links:
- Developing a New Worker Type
- Kubernetes Deployment Guide
- Managing Work Pools

For a deeper dive, you may need to look into the relevant Prefect worker configuration in your deployment, or check the source code if further customization is required.
k
by default, it doesn't have any knowledge of node pools I think? a fresh-from-the-helm-chart k8s worker only cares that it has permission to submit jobs to its resident namespace
k
Ahh so it's K8s itself, not the worker
k
you can add taints and tolerations, node selectors, etc in the work pool config/deployment job variables
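for example, a `nodeSelector` in the work pool's base job template can pin flow run pods to a specific GKE node pool. Rough sketch of just the relevant fragment (the full default Kubernetes worker template has more fields, e.g. the container spec, which I've omitted; `my-pool` is a placeholder node pool name):

```json
{
  "job_configuration": {
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "spec": {
        "template": {
          "spec": {
            "nodeSelector": {
              "cloud.google.com/gke-nodepool": "my-pool"
            }
          }
        }
      }
    }
  }
}
```

the `cloud.google.com/gke-nodepool` label is applied automatically by GKE to every node, so the scheduler (and autoscaler) will only consider that pool for these pods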
👀 1
k
may look into that ^ we're not seeing our GKE node pool autoscaling working as intended. the worker attempts to submit a flow > gets told there's insufficient CPU resources >
but there are 3 node pools with adequate CPU available
maybe unrelated but all of a sudden today we're seeing flows stay in Running status endlessly because the job pod completes but the worker isn't notified
h
@Kevin Grismore Is there any documentation / example on how to define tolerations for jobs within the base job template? I am currently struggling to schedule jobs only on certain nodes that have a specific taint
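not sure about an official example, but a `tolerations` + `nodeSelector` fragment in the job manifest of the base job template might look like this (a sketch: the taint key/value `dedicated=flows:NoSchedule` and the matching node label are hypothetical, so substitute whatever taint your nodes actually carry; merge this into the existing `job_manifest` rather than replacing it):

```json
{
  "job_configuration": {
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
            "tolerations": [
              {
                "key": "dedicated",
                "operator": "Equal",
                "value": "flows",
                "effect": "NoSchedule"
              }
            ],
            "nodeSelector": {
              "dedicated": "flows"
            }
          }
        }
      }
    }
  }
}
```

note the toleration alone only *allows* scheduling on the tainted nodes; you still need the `nodeSelector` (or affinity) to make jobs land *only* there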