@Marvin I am getting error: My prefect worker...
# marvin-ai
@Marvin I am getting error: My prefect workers give this error: Request attempt #7/9 failed; will retry: GET https://kubernetes.default.svc/api/v1/namespaces -> APIForbiddenError('namespaces is forbidden: User "system:serviceaccount:prefect3:low-resource-worker" cannot list resource "namespaces" in API group "" at the cluster scope', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'namespaces is forbidden: User "system:serviceaccount:prefect3:low-resource-worker" cannot list resource "namespaces" in API group "" at the cluster scope', 'reason': 'Forbidden', 'details': {'kind': 'namespaces'}, 'code': 403})
thought for 228.4 seconds
Thanks for the details: this is an RBAC/permissions issue.

What's happening
- Prefect's Kubernetes worker makes a cluster-scoped call to the Kubernetes API to list namespaces (`GET /api/v1/namespaces`). This is used during worker startup to gather cluster metadata (e.g., derive a cluster identifier and/or verify connectivity).
- Your worker's service account (`system:serviceaccount:prefect3:low-resource-worker`) doesn't have permission to list namespaces at the cluster scope, so the call returns 403 Forbidden.

Two ways to fix it

1) Grant the minimal cluster-scope permission to list namespaces
- Add a ClusterRole with `list` (and typically `get`) on the `namespaces` resource.
- Keep other permissions namespaced for least privilege (jobs/pods/etc. in the worker's namespace).

Example RBAC (adjust names/namespace as needed):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: low-resource-worker
  namespace: prefect3
---
# Namespaced permissions for jobs/pods the worker manages
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-worker-ns-role
  namespace: prefect3
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-worker-ns-rb
  namespace: prefect3
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: Role
  name: prefect-worker-ns-role
  apiGroup: rbac.authorization.k8s.io
---
# Cluster-scope permission to read/list namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-cluster-role
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-cluster-rb
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: ClusterRole
  name: prefect-worker-cluster-role
  apiGroup: rbac.authorization.k8s.io
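A quick way to apply and verify the grant (a sketch: prefect-worker-rbac.yaml is just a placeholder for wherever you save the manifests above, and the impersonated username comes straight from your error message):
# Apply the RBAC manifests
kubectl apply -f prefect-worker-rbac.yaml
# Confirm the worker's service account can now list namespaces; this should print "yes"
kubectl auth can-i list namespaces --as=system:serviceaccount:prefect3:low-resource-worker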
2) If you cannot grant cluster-wide list on namespaces
- In the current Prefect Kubernetes worker, this request is part of startup/metadata gathering, and there isn't a supported config option to disable just that call.
- If granting `list` on `namespaces` is not possible, please share your Prefect and prefect-kubernetes versions; in some older versions, the worker only did a single read of a specific namespace (e.g., kube-system). We can check whether pinning or upgrading avoids the cluster-wide list for your use case.

Helpful references
- Prefect 3 Kubernetes worker docs: Deploy a worker using Helm
- Prefect Kubernetes integration: prefect-kubernetes integration
- Background (Prefect 2.x context, similar permission needed to read namespaces): Regression with Kubernetes (403 Forbidden) after #7747 (Use cluster uid)

Can you share:
- Your Prefect version and prefect-kubernetes version
- How you deployed the worker (Helm vs. custom manifests)
- The namespace you intend the worker to operate in

With that, I can confirm the exact minimal RBAC for your version or suggest an upgrade path if the behavior has changed.
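If it helps, one way to collect those versions is to run the CLIs inside the worker pod (a sketch: <your-worker-pod> is a placeholder, and it assumes pip is available in the image):
# Prefect version reported by the worker's environment
kubectl -n prefect3 exec <your-worker-pod> -- prefect version
# Installed prefect-kubernetes version
kubectl -n prefect3 exec <your-worker-pod> -- pip show prefect-kubernetes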
@Marvin I already use Cluster Config, should that not handle the access?
thought for 61.8 seconds
Short answer: no. A Cluster Config tells the worker how to connect/authenticate to the cluster, but it does not grant any RBAC permissions by itself.

Details
- If your worker is running in-cluster (which it is, given the error shows the service account `prefect3:low-resource-worker`), the Kubernetes Python client will use the pod's service account token. That service account must have the RBAC needed for what the worker does.
- A `KubernetesClusterConfig` or `KubernetesCredentials` block can supply different credentials (e.g., a kubeconfig or token for a user with broader permissions). That can "work" if those creds already have the needed rights, but it doesn't change or elevate the service account's RBAC. In-cluster, unless you explicitly point the worker to external creds, you're using the pod's service account.

What you need to do
- Keep your namespaced permissions for jobs/pods/logs/events in the worker's namespace.
- Add a small ClusterRole to allow `get`/`list` on `namespaces` and bind it to the worker's service account. That will resolve the `GET /api/v1/namespaces` 403 you're seeing.

Example (replace names/namespaces as needed):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-cluster-role
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-cluster-rb
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: ClusterRole
  name: prefect-worker-cluster-role
  apiGroup: rbac.authorization.k8s.io
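To double-check that these objects were created the way you expect (names assumed from the snippet above), you can inspect them after applying:
# Rules granted by the ClusterRole
kubectl describe clusterrole prefect-worker-cluster-role
# Subjects the binding applies to (should list low-resource-worker in prefect3)
kubectl describe clusterrolebinding prefect-worker-cluster-rb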
If you prefer not to grant cluster-scope list
- You'd need to run the worker with credentials (via your Cluster Config) that already have the required permissions, or upgrade to a version/setting that does not perform that cluster-scope call. At present, the Kubernetes worker performs this cluster metadata call; there isn't a supported toggle to skip it.

References
- Deploying a worker on Kubernetes: Kubernetes worker via Helm
- Prefect Kubernetes integration: prefect-kubernetes integration

If you can share:
- How the worker is deployed (Helm vs. manifests)
- Your Prefect and prefect-kubernetes versions
…I can tailor the exact RBAC snippet or Helm values for your setup.
@Marvin are you even sure I need to create the SA? It is not part of any guide I can find.
thought for 57.2 seconds
Totally fair question: you don't have to create a new ServiceAccount just because you're using Prefect.

Key point: some ServiceAccount will always be used
- If you don't specify one, your worker pod uses the namespace's `default` ServiceAccount.
- Your error shows the pod is using `system:serviceaccount:prefect3:low-resource-worker`.
- Regardless of which SA you use (default or custom), that SA must have the RBAC needed by the worker. Cluster Config/kubeconfig tells the worker how to connect, but it does not grant permissions to the SA your pod runs under.

Why you're seeing 403
- The Kubernetes worker does a cluster-scoped `GET /api/v1/namespaces` during startup/metadata checks.
- If the SA the pod is using doesn't have `get`/`list` on `namespaces`, you'll get the 403 you posted.

You have two options

1) Keep using your existing SA (or the default SA) and add a small cluster-scope permission
- Bind that SA to a ClusterRole that allows `get`/`list` on `namespaces` (everything else can stay namespaced: jobs, pods, pods/log, events).
- Minimal YAML to add (adjust names/namespaces as needed):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-namespaces
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-namespaces-rb
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: ClusterRole
  name: prefect-worker-namespaces
  apiGroup: rbac.authorization.k8s.io
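If you'd rather not manage extra YAML, the same ClusterRole and binding can also be created imperatively; this is a sketch using the names from the snippet above and the service account from your error:
# Cluster-scope read access on namespaces
kubectl create clusterrole prefect-worker-namespaces --verb=get,list --resource=namespaces
# Bind it to the worker's existing service account
kubectl create clusterrolebinding prefect-worker-namespaces-rb --clusterrole=prefect-worker-namespaces --serviceaccount=prefect3:low-resource-worker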
2) Don't grant cluster-scope perms and instead run the worker with external credentials that already have them
- If you point `KubernetesClusterConfig` to a kubeconfig or token with the needed rights, the worker will use that identity for API calls. In your logs, though, the identity is a pod SA, so you're currently using in-cluster auth.

How to see which SA your worker pod is using:
kubectl -n prefect3 get pod <your-worker-pod> -o jsonpath='{.spec.serviceAccountName}'
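Related to whether you need to create an SA at all: you can list what already exists in the namespace; the one from your error is already there, since the API server authenticated it before returning the 403:
# ServiceAccounts that already exist in the worker's namespace
kubectl -n prefect3 get serviceaccounts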
Docs for context
- Deploying a worker on K8s: Kubernetes worker via Helm
- Prefect Kubernetes integration: prefect-kubernetes
- Background (2.x but same idea about namespace permissions): Regression with Kubernetes (403 Forbidden) after #7747

If you tell me:
- How you deployed the worker (Helm vs. manifests)
- Your Prefect and prefect-kubernetes versions
…I can tailor the exact RBAC or Helm values so you don't over-provision.