# ask-marvin
n
@Marvin How to set resources for a job? I am using a Kubernetes work pool and tried to set them in the prefect.yaml:
```yaml
work_pool:
  name: default-kubernetes-pool
  job_variables:
    resources:
      limits:
        cpu: 1000m
        memory: 1000Mi
      requests:
        cpu: 1000m
        memory: 1000Mi
```
but it seems to have no effect on the pod resources.
m
It seems you're trying to configure resource limits for a Kubernetes work pool through the Prefect `prefect.yaml` file. According to the Prefect Work Pools documentation, if you need to customize job variables (like resource limits), you should refer to the Kubernetes Worker documentation to see how to override those settings. Furthermore, per the Infrastructure documentation, you can use job variables such as `env`, `image`, and `labels` to customize the Kubernetes Job, which can also include resource settings such as CPU and memory limits.

To diagnose why the resource settings are not taking effect:
1. Check the structure of your `prefect.yaml` file to ensure it conforms to the expected format.
2. Inspect the Kubernetes work pool configuration in the Prefect UI to verify that the resource limits are correctly recognized.
3. Review any customizations applied to the base job manifest, as changes there might conflict with or override the values set in the work pool section.

Please refer to the work pools documentation for more details and the appropriate steps to adjust work pool configuration through the UI or the Prefect CLI. If necessary, the Kubernetes Worker documentation covers deeper customization needs.
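As a rough illustration, here is what `job_variables` typically look like for a Kubernetes work pool in `prefect.yaml` when the default base job template is used. The variable names below (`image`, `namespace`, `service_account_name`, `env`) come from that default template, while the concrete values are made-up examples; a top-level `resources` variable is usually not defined in the default template, which would explain why setting it has no effect:
```yaml
work_pool:
  name: default-kubernetes-pool
  job_variables:
    # Variables the default Kubernetes base job template usually exposes.
    # A `resources` variable is typically NOT among them, so setting one
    # here has no effect on the resulting pod.
    image: my-registry/my-flow-image:latest   # example image, adjust to yours
    namespace: prefect                        # example namespace
    service_account_name: prefect-jobs        # example service account
    env:
      LOG_LEVEL: DEBUG
```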
n
@Marvin Also tried with `job_manifest`, with no success:
```json
{
  "job_manifest": {
    "spec": {
      "containers": {
        "resources": {
          "limits": {
            "cpu": "1000m",
            "memory": "1000Mi"
          },
          "requests": {
            "cpu": "1000m",
            "memory": "1000Mi"
          }
        }
      }
    }
  }
}
```
m
Based on the Prefect Deployment Guide for Kubernetes, to modify the default base job template and set CPU and memory requests and limits, you need to edit the JSON representation of the job manifest directly. Here's how you might set the requests and limits in the advanced settings by adding the respective configuration:
```json
{
  "job_manifest": {
    "spec": {
      "template": {
        "spec": {
          "containers": [
            {
              "name": "flow-container",
              "resources": {
                "limits": {
                  "cpu": "1000m",
                  "memory": "1000Mi"
                },
                "requests": {
                  "cpu": "1000m",
                  "memory": "1000Mi"
                }
              }
            }
          ]
        }
      }
    }
  }
}
```
Note that you need to ensure the following:
- The container name in the manifest (`flow-container` above is just a placeholder) must match the actual container name used for your flow runs.
- The `job_manifest` belongs in the work pool's base job template (editable through the UI or the Prefect CLI), not under `job_variables` in your `prefect.yaml`.

Based on the Kubernetes Infrastructure documentation, you should also verify that `kubectl` is available and properly configured, and that your storage settings are appropriate for Kubernetes execution. If these settings are incorrect, your work pool might not behave as expected.

If you have followed these steps and your resources are still not being set as expected, inspect the work pool configuration in the Prefect UI or use the Prefect CLI to verify that your configuration is correct and has been successfully applied. If problems persist, review open issues on GitHub to see if others have encountered similar problems, or report a new issue.
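As a sketch of how this usually fits together (not verified against your setup): once the work pool's base job template has been edited, either in the UI's Advanced tab or with something like `prefect work-pool update default-kubernetes-pool --base-job-template base-job-template.json` (the default template can usually be dumped with `prefect work-pool get-default-base-job-template --type kubernetes`), to add `{{ cpu_limit }}` / `{{ memory_limit }}` placeholders under the container's `resources` plus matching entries in the template's `variables` section, those names can then be set per deployment in `prefect.yaml`. The variable names here are hypothetical, not built in:
```yaml
work_pool:
  name: default-kubernetes-pool
  job_variables:
    # These only take effect if the base job template defines variables
    # named cpu_limit / memory_limit and references them as
    # {{ cpu_limit }} / {{ memory_limit }} in the job manifest;
    # they are illustrative names, not part of the default template.
    cpu_limit: "1000m"
    memory_limit: "1000Mi"
```
Hard-coding the `resources` block directly into the base job template's manifest, as in the JSON above, should also work without defining any extra variables.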
n
@Marvin not working
m
It appears there is a relevant GitHub issue that might relate to your problem with Kubernetes work pool configurations not being applied as expected. Someone had a similar issue with labels not being applied to jobs on a Kubernetes work pool, which could be analogous to your issue with resource limits not taking effect. You can check out the issue here: Labels not being applied to jobs on a Kubernetes work-pool. This might provide some context on whether this is a known bug or something specific to your configuration.

Since this seems to be an ongoing or unresolved issue, I recommend you also check other GitHub issues or consider reporting a new one detailing your specific problem. Make sure to provide clear reproduction steps, the expected vs. actual behavior, and any relevant configuration files or manifests. This will help the developers diagnose the problem and provide a resolution or workaround more effectively.