Bruno Nirello
12/21/2023, 9:07 PM

Marvin
12/21/2023, 9:08 PM
To configure `job_variables` in the `prefect.yaml` configuration file for Prefect 2.0, you'll need to be aware of how the configuration system works with respect to deployment and execution environments (as applicable).
However, based on the search conducted through the documentation, I did not find a direct excerpt that specifically addresses configuring resource limits and requests in the `prefect.yaml` file. Typically, in Kubernetes, you would set resource limits and requests within the Kubernetes execution environment configuration to control CPU and memory allocation. I would expect a similar process in Prefect 2.x, where you can specify these details within the execution environment or the job's configuration.
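For reference, in standard Kubernetes these settings live on the container spec inside the pod template; a minimal sketch of that shape (values here are illustrative, not Prefect defaults):

```yaml
# Standard Kubernetes Job manifest fragment showing container-level resources
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      containers:
        - name: prefect-job
          image: my-image:my-tag
          resources:
            requests:
              cpu: 200m      # scheduler reserves at least this much
              memory: 256Mi
            limits:
              cpu: 500m      # container is throttled above this CPU
              memory: 512Mi  # container is OOM-killed above this memory
      restartPolicy: Never
```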
For `job_variables`: based on the documentation, you can override environment variables specified in a work pool through the `job_variables` parameter. For example, when deploying a flow from Python, you could use `job_variables` to specify environment variables like below:
```python
if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-deployment-never-pull",
        work_pool_name="my-docker-pool",
        job_variables={"env": {"EXTRA_PIP_PACKAGES": "boto3"}},
        image="my-image:my-tag",
        push=False,
    )
```
In this example, the environment variable `EXTRA_PIP_PACKAGES` is a special variable that Prefect uses to install additional Python packages at runtime.
As for configuring these settings within `prefect.yaml`, you would typically define your deployment and runtime settings following Prefect's configuration conventions. The `prefect.yaml` file should have sections corresponding to different Prefect components like `flows`, `deployments`, `logging`, etc., and you can add custom configuration there.
Since the specific details about configuring resource requests and limits are not directly available in the excerpt returned, it might require delving deeper into Prefect's documentation or reaching out to their community or support channels. For more precise information on configuring `job_variables` and resource constraints in the `prefect.yaml` file, you might need to look for example configuration files or seek assistance from Prefect's documentation and support.

Jack P
12/21/2023, 9:21 PM
```yaml
name: ...
prefect-version: 2.14.9

definitions:
  job_variables:
    cpu1_memory1024: &cpu1_memory1024
      region: us-central1
      cpu: 1000m
      memory: 1024Mi
      env: ...

deployments:
  - name: ...
    description: ...
    tags: ["main", "{{ $IMAGE_TAG }}"]
    entrypoint: ...
    work_pool:
      name: ...
      work_queue_name: default
      job_variables: *cpu1_memory1024
```
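The `&cpu1_memory1024` / `*cpu1_memory1024` pair above is plain YAML anchor/alias syntax: the alias expands to the full mapping when the file is parsed. A minimal sketch of that expansion (assuming PyYAML is available; the `example` deployment name is made up):

```python
import yaml  # third-party: pip install pyyaml

doc = """
definitions:
  job_variables:
    cpu1_memory1024: &cpu1_memory1024
      region: us-central1
      cpu: 1000m
      memory: 1024Mi
deployments:
  - name: example
    work_pool:
      job_variables: *cpu1_memory1024
"""

parsed = yaml.safe_load(doc)
# The alias resolves to the same mapping defined under definitions
print(parsed["deployments"][0]["work_pool"]["job_variables"])
# → {'region': 'us-central1', 'cpu': '1000m', 'memory': '1024Mi'}
```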
Bruno Nirello
12/21/2023, 9:36 PM

Bruno Nirello
12/21/2023, 10:12 PM

Jack P
12/21/2023, 10:20 PM

Bruno Nirello
12/21/2023, 10:31 PM

Bruno Nirello
12/22/2023, 12:39 PM

Nate
12/22/2023, 2:30 PM

Nate
12/22/2023, 2:30 PM
the `job_variables` key

Nate
12/22/2023, 2:32 PM
`resources` -> `containers` -> ... -> `job_manifest`
so you can mix and match them when overriding the base job template for each deployment (instead of creating 1 entire new work pool for each unique set of `resources` values)

Bruno Nirello
12/22/2023, 3:27 PM"job_configuration": {
"env": "{{ env }}",
"name": "{{ name }}",
"labels": "{{ labels }}",
"command": "{{ command }}",
"namespace": "{{ namespace }}",
"job_manifest": {
"kind": "Job",
"spec": {
"template": {
"spec": {
"containers": [
{
"env": [
{
"name": "VCD_CREDENTIAL",
"valueFrom": {
"secretKeyRef": {
"key": "vcdpre",
"name": "vcdpre"
}
}
},
{
"name": "S3_CREDENTIAL",
"valueFrom": {
"secretKeyRef": {
"key": "aws-secret",
"name": "aws-secret"
}
}
}
],
"args": "{{ command }}",
"name": "prefect-job",
"image": "{{ image }}",
"imagePullPolicy": "{{ image_pull_policy }}",
"resources": {
"limits": {
"cpu": "500m",
"memory": "512Mi"
},
"requests": {
"cpu": "200m",
"memory": "256Mi"
}
}
}
],
"completions": 1,
"parallelism": 1,
"restartPolicy": "Never",
"serviceAccountName": "{{ service_account_name }}"
}
},
"backoffLimit": 0,
"ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
},
"metadata": {
"labels": "{{ labels }}",
"namespace": "{{ namespace }}",
"generateName": "{{ name }}-"
},
"apiVersion": "batch/v1"
},
"stream_output": "{{ stream_output }}",
"cluster_config": "{{ cluster_config }}",
"job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
"pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
}
combining with `prefect.yaml`:
```yaml
definitions:
  work_pools:
    my_docker_work_pool: &my_docker_work_pool
      name: k8s_pod
      work_queue_name: default
      job_variables:
        env:
          CREDENTIALZ: 'HI'
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 312Mi

# the deployments section allows you to provide configuration for deploying flows
deployments:
  - name: cloud_director_github
    version:
    tags: []
    description:
    entrypoint: flows/cloud_director_mpc/flow.py:kube_flow
    parameters: {}
    work_pool: *my_docker_work_pool
    schedule:
```
Nate
12/22/2023, 3:32 PM
you have `resources` as a top-level key of `job_variables`, but it actually is nested like:
`job_variables`:
• `job_manifest`
  ◦ `spec`
and so on
so when we try to merge your deployment-level overrides with the existing work pool, we likely discard / fail to merge if we find a key at an unexpected level
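The merge behavior described above can be illustrated with a toy deep-merge (a hypothetical sketch, not Prefect's actual merge code): an override placed at the wrong depth just becomes a stray top-level key and never reaches the nested `resources` field.

```python
def deep_merge(base: dict, overrides: dict) -> dict:
    """Recursively merge overrides into base: nested dicts merge, everything else replaces."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Simplified base job template: resources lives deep inside job_manifest
base = {"job_manifest": {"spec": {"template": {"spec": {
    "containers": [{"name": "prefect-job", "resources": {}}]}}}}}

# Override with resources at the top level of job_variables (the wrong depth)
merged = deep_merge(base, {"resources": {"limits": {"cpu": "500m"}}})
# 'resources' is now a new top-level key; the container's resources are untouched
```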
i'd probably recommend making a `spec` yaml definition so you can do something like
```yaml
definitions:
  work_pools:
    my_docker_work_pool: &my_docker_work_pool
      name: k8s_pod
      work_queue_name: default
      job_variables:
        env:
          CREDENTIALZ: 'HI'
        job_manifest:
          kind: Job
          spec: *my_spec_w_resource_reqs
```
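Filling in that suggestion, the `&my_spec_w_resource_reqs` anchor would be defined elsewhere under `definitions`; a hedged sketch (the `specs` section name is made up, and the field names simply mirror the base job template shared above):

```yaml
definitions:
  specs:
    my_spec_w_resource_reqs: &my_spec_w_resource_reqs
      template:
        spec:
          containers:
            - name: prefect-job
              resources:
                requests:
                  cpu: 300m
                  memory: 312Mi
                limits:
                  cpu: 500m
                  memory: 512Mi
  work_pools:
    my_docker_work_pool: &my_docker_work_pool
      name: k8s_pod
      work_queue_name: default
      job_variables:
        env:
          CREDENTIALZ: 'HI'
        job_manifest:
          kind: Job
          spec: *my_spec_w_resource_reqs
```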
Bruno Nirello
12/22/2023, 4:05 PM

Bruno Nirello
12/22/2023, 4:06 PM
`prefect.yaml`

Bruno Nirello
12/22/2023, 4:07 PM

Nate
12/22/2023, 8:01 PM