# ask-marvin
y
@Marvin can you show me an example of a `deployments.yaml` file for an AKS worker that uses `job_variables` to override the CPU and memory request
m
Ah, the sweet smell of YAML in the air. Here's an example of a `deployments.yaml` file for an AKS worker that uses `job_variables` to override the CPU and memory request:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-aks-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-aks-worker
  template:
    metadata:
      labels:
        app: my-aks-worker
    spec:
      containers:
      - name: my-aks-worker
        image: prefecthq/prefect:latest
        env:
        - name: PREFECT__CLOUD__AGENT__LABELS
          value: "aks"
        - name: PREFECT__CLOUD__AGENT__TOKEN
          valueFrom:
            secretKeyRef:
              name: prefect-api-token
              key: token
        resources:
          requests:
            memory: "{{ job_variables.memory_request }}Mi"
            cpu: "{{ job_variables.cpu_request }}m"
```
In this example, `job_variables.memory_request` and `job_variables.cpu_request` are used to set the memory and CPU requests for the AKS worker container. Make sure to replace the `prefecthq/prefect:latest` image with the appropriate Prefect image for your use case, and replace the placeholders `{{ job_variables.memory_request }}` and `{{ job_variables.cpu_request }}` with the actual values you want to use when deploying your AKS worker. I hope this helps you in your quest for YAML perfection. Good luck, human!
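For what it's worth, in Prefect's worker model placeholders like these normally live in the work pool's base job template rather than in a hand-written Deployment manifest, and they reference the variable names directly, e.g. `{{ cpu_request }}`. Here is a minimal sketch of the relevant pieces of a customized Kubernetes base job template, assuming `cpu_request` and `memory_request` are custom variables you have added yourself; it is shown as YAML for readability, though the template is typically edited as JSON:
```yaml
# Sketch of a customized Kubernetes work pool base job template
# (assumed variable names, not built-in defaults).
# The job_manifest section consumes the variables...
job_configuration:
  job_manifest:
    spec:
      template:
        spec:
          containers:
            - resources:
                requests:
                  cpu: "{{ cpu_request }}"
                  memory: "{{ memory_request }}"
# ...and the variables section declares them, with defaults that apply
# whenever a deployment does not override them.
variables:
  type: object
  properties:
    cpu_request:
      type: string
      default: "500m"
    memory_request:
      type: string
      default: "512Mi"
```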
y
@Marvin, what if I want my specific deployment to have different infrastructure resources than the default worker? My `deployment.yaml` currently looks like this:
```yaml
deployments:
  - name: aks-test-sp
    entrypoint: .\workflow\flow_demo.py:aks_demo_flow
    work_pool:
        name: aks-worker
    job_variables:
      cpu_request: "4"
      cpu_limit: "8"
```
m
Ah, a twist in the tale of YAML. To customize the infrastructure resources for your specific deployment, you can modify the `deployments.yaml` file like this:
```yaml
deployments:
  - name: aks-test-sp
    entrypoint: .\workflow\flow_demo.py:aks_demo_flow
    work_pool:
      name: aks-worker
    job_variables:
      cpu_request: "4"
      cpu_limit: "8"
      memory_request: "2Gi"
      memory_limit: "4Gi"
```
Now, let's update the AKS worker deployment to use these `job_variables`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-aks-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-aks-worker
  template:
    metadata:
      labels:
        app: my-aks-worker
    spec:
      containers:
      - name: my-aks-worker
        image: prefecthq/prefect:latest
        env:
        - name: PREFECT__CLOUD__AGENT__LABELS
          value: "aks"
        - name: PREFECT__CLOUD__AGENT__TOKEN
          valueFrom:
            secretKeyRef:
              name: prefect-api-token
              key: token
        resources:
          requests:
            memory: "{{ job_variables.memory_request }}"
            cpu: "{{ job_variables.cpu_request }}"
          limits:
            memory: "{{ job_variables.memory_limit }}"
            cpu: "{{ job_variables.cpu_limit }}"
```
In this example, the `job_variables` from your `deployments.yaml` file are used to set the memory and CPU requests and limits for the AKS worker container. Make sure to replace the `prefecthq/prefect:latest` image with the appropriate Prefect image for your use case. The placeholders `{{ job_variables.memory_request }}`, `{{ job_variables.memory_limit }}`, `{{ job_variables.cpu_request }}`, and `{{ job_variables.cpu_limit }}` are filled in with the values from your `deployments.yaml` file when the job is created. I hope this clears up the YAML fog. Good luck with your deployment, human!
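To make the substitution concrete: with the `job_variables` from the `deployments.yaml` above, the rendered resources block for a flow run's container would come out roughly like this (assuming the work pool's base job template wires those variables into `resources`):
```yaml
# Rendered result of the template above with the deployment's overrides.
resources:
  requests:
    memory: "2Gi"   # from memory_request
    cpu: "4"        # from cpu_request
  limits:
    memory: "4Gi"   # from memory_limit
    cpu: "8"        # from cpu_limit
```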
y
@Christopher Boyd this seems to suggest I need to also configure my work pool, but then would it still take defaults, or always expect an input value for those?
@Marvin is there a way I can have my deployment run in a different nodepool?
m
Certainly, my human friend. To run your deployment in a different node pool, you can use Kubernetes node selectors. Node selectors allow you to specify which nodes your deployment should run on based on their labels. First, make sure your desired node pool has a unique label. For example, if you have a node pool named `custom-nodepool`, you can label its nodes with `nodepool: custom-nodepool`. Next, update your AKS worker deployment to include the `nodeSelector` field:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-aks-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-aks-worker
  template:
    metadata:
      labels:
        app: my-aks-worker
    spec:
      containers:
      - name: my-aks-worker
        image: prefecthq/prefect:latest
        env:
        - name: PREFECT__CLOUD__AGENT__LABELS
          value: "aks"
        - name: PREFECT__CLOUD__AGENT__TOKEN
          valueFrom:
            secretKeyRef:
              name: prefect-api-token
              key: token
        resources:
          requests:
            memory: "{{ job_variables.memory_request }}"
            cpu: "{{ job_variables.cpu_request }}"
          limits:
            memory: "{{ job_variables.memory_limit }}"
            cpu: "{{ job_variables.cpu_limit }}"
      nodeSelector:
        nodepool: custom-nodepool
```
In this example, the `nodeSelector` field is set to match nodes with the label `nodepool: custom-nodepool`, which ensures that your deployment runs on nodes within the `custom-nodepool` node pool. Remember to adjust the other parts of the YAML file according to your specific requirements, such as the Prefect image and the resource requests and limits. And there you have it, another YAML mystery solved. Good luck with your node pool adventures, human!
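On AKS specifically, you may not need to label nodes by hand: each node is automatically labeled with the name of its node pool, so a node selector keyed on the built-in label (assuming a node pool actually named `custom-nodepool`) could look like this:
```yaml
# AKS applies an agentpool=<pool name> label to every node in a pool,
# so this selects the "custom-nodepool" pool without any manual labeling.
nodeSelector:
  agentpool: custom-nodepool
```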
y
is there a way I can pass this through as an infrastructure override in my `deployment.yaml`, which looks like this:
```yaml
deployments:
  - name: aks-test-sp
    entrypoint: .\workflow\flow_demo.py:aks_demo_flow
    work_pool:
      name: aks-worker
      job_variables:
        cpu_request: "4"
        cpu_limit: "8"
        memory_request: "2Gi"
        memory_limit: "4Gi"
```
c
You can, it just passes through to the template for the job
Node selectors and taints are just key-values
The scheduler looks at the keys and values on the job and then decides which node is best
I have an example job template with them set, but again I haven’t applied this configuration just yet to the deployment model you’re using at the moment
I’d need a little bit of time to check the syntax
You can think of it just like a sticker on the deployment that says “go to node 1, I’m a small job!” and another sticker that says “go to node 2, I’m a big job!”
Where that sticker is literally just something like `nodeSelector: prefect: big-job`
The key-values themselves are arbitrary and could be anything you want
I’ve done `prefect: Normal` and `prefect: high_memory` - best practice usually says to use the app or tier, and some value you decide
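Putting that together with the earlier `deployment.yaml`, a sketch of the deployment-level override could look like the following, assuming the work pool's base job template exposes a `node_selector` variable (a hypothetical custom variable, not a built-in) and templates it into the job manifest's `spec.template.spec.nodeSelector`:
```yaml
deployments:
  - name: aks-test-sp
    entrypoint: .\workflow\flow_demo.py:aks_demo_flow
    work_pool:
      name: aks-worker
      job_variables:
        cpu_request: "4"
        cpu_limit: "8"
        memory_request: "2Gi"
        memory_limit: "4Gi"
        node_selector:      # hypothetical custom variable, see note above
          prefect: big-job  # arbitrary key-value, per the "sticker" analogy
```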
y
ah perfect, yeah that sounds great
sounds like I don't have to worry about creating any other workers/work pools, now it's all just infra overrides for anything else I want to do
c
For the most part - the infra override only sets up your Kubernetes job the way you want it
If Kubernetes doesn’t have enough resources to schedule it though, it won’t (unless you have autoscaling)