Brennan Tolman
07/10/2023, 9:34 PM
HTTP response body: {"kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "jobs.batch is forbidden: User cannot create resource \"jobs\" in API group \"batch\" in the namespace \"default\"", "reason": "Forbidden"}
The confusing part about this is that I set the namespace override, and the manifest files all reference that namespace override. Why is it still trying to create jobs in the default namespace? How can I fix it?

Marvin
07/10/2023, 9:35 PM
In one GitHub issue, users note that when no namespace is configured, jobs are submitted to `default`, which may not be where the worker is actually deployed. This can lead to the error message you're seeing. The proposed solution is to deploy work wherever the worker is running (in the same namespace) if the namespace setting is empty. They also suggest making that setting empty by default.
In another GitHub issue, #8689, a user reported a similar problem when trying to change a KubernetesJob namespace via the Prefect CLI. It seems that the original namespace from the KubernetesJob is used instead of the changed namespace from `job_manifest`, leading to a similar error message.
Based on these findings, you might need to leave the namespace setting empty so that the worker deploys work in the same namespace where it's running. Also, if you're using the Prefect CLI to change the KubernetesJob namespace, you might hit the issue above, where the original namespace is used instead of the changed one.
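For orientation, here is a minimal sketch of the relevant parts of a Kubernetes work pool's base job template. This is modeled on Prefect's default template rather than copied verbatim, so treat the exact shape as an assumption and compare it against the output of `prefect work-pool get-default-base-job-template --type kubernetes`:

```json
{
  "variables": {
    "properties": {
      "namespace": {
        "type": "string",
        "title": "Namespace",
        "default": "default",
        "description": "The Kubernetes namespace to create jobs within."
      }
    }
  },
  "job_configuration": {
    "namespace": "{{ namespace }}"
  }
}
```

The `"default"` value here is what sends jobs to the `default` namespace when nothing overrides it. Changing it to your own namespace (or clearing it and setting the namespace per deployment) and updating the pool with something like `prefect work-pool update my-pool --base-job-template template.json` should steer jobs to the right place (the pool and file names here are placeholders).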
I hope this helps you navigate the labyrinth of Kubernetes and Prefect. Remember, even in the most confusing of libraries, there's always a way to find the book you're looking for. Or in this case, the namespace.

Brennan Tolman
07/10/2023, 9:51 PM

Marvin
07/10/2023, 9:52 PM
From what I found, the worker's namespace can be set when deploying via the Helm chart's `values.yaml` file. This aligns with your proposed solution of adjusting the base job template to change the namespace from "default" to your own.
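If you're deploying the worker with Prefect's Helm chart, a minimal sketch of the `values.yaml` side might look like this. The key names (`namespaceOverride`, `worker.config.workPool`) are assumptions based on common chart conventions; verify them against the values file of the chart version you're using:

```yaml
# Sketch of Helm values for a Prefect worker (key names are assumptions).
namespaceOverride: my-namespace   # deploy the worker into this namespace
worker:
  config:
    workPool: my-k8s-pool         # work pool whose base job template is applied
```

The point from the discussion above is that the namespace the worker runs in and the namespace its jobs are created in must agree; otherwise the job lands where the worker's service account has no RBAC permissions, which is exactly the Forbidden error you pasted.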
In the GitHub issue "Update to worker base job template logic for nested placeholders", it's mentioned that the worker base job template logic was updated to allow placeholders to be placed into nested values within a JSON structure. This could be relevant if your base job template contains nested placeholders, as sketched below.
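For example, the `{{ namespace }}` placeholder can appear both at the top level of the job configuration and nested inside the job manifest's metadata. Again, this is a sketch modeled on the default Kubernetes template, not a verbatim copy:

```json
{
  "job_configuration": {
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "namespace": "{{ namespace }}"
      }
    }
  }
}
```

With the nested-placeholder update, both occurrences should be filled from the same `namespace` variable; on a worker version that predates it, the nested one might not be substituted, which could explain jobs landing in `default` despite your override.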
However, I didn't find any specific mention of changing the namespace in the base job template as a solution to the issue you're facing. This doesn't necessarily mean it's not a valid solution, just that it hasn't been explicitly discussed in the sources I found.
So, it seems your proposed solution could work, but as with all things in life, it's probably best to test it out in a controlled environment first. After all, it's better to be safe than sorry, especially when dealing with something as intricate as Kubernetes.
And remember, if all else fails, there's always the option to turn it off and on again. That's how it works, right?