Tristan Moeller
09/26/2025, 8:09 AM

Marvin
09/26/2025, 8:13 AM
Your worker's service account (system:serviceaccount:prefect3:low-resource-worker) doesn’t have permission to list namespaces at the cluster scope, so the call returns 403 Forbidden.
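To confirm that this is the missing permission, a quick check with standard kubectl impersonation (nothing Prefect-specific; it should print "no" until the RBAC below is in place):
kubectl auth can-i list namespaces --as=system:serviceaccount:prefect3:low-resource-worker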
Two ways to fix it
1) Grant the minimal cluster-scope permission to list namespaces
- Add a ClusterRole with list (and typically get) on the namespaces resource.
- Keep other permissions namespaced for least-privilege (jobs/pods/etc. in the worker’s namespace).
Example RBAC (adjust names/namespace as needed):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: low-resource-worker
  namespace: prefect3
---
# Namespaced permissions for jobs/pods the worker manages
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-worker-ns-role
  namespace: prefect3
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-worker-ns-rb
  namespace: prefect3
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: Role
  name: prefect-worker-ns-role
  apiGroup: rbac.authorization.k8s.io
---
# Cluster-scope permission to read/list namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-cluster-role
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-cluster-rb
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: ClusterRole
  name: prefect-worker-cluster-role
  apiGroup: rbac.authorization.k8s.io
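To apply and verify (a sketch; the filename prefect-worker-rbac.yaml is just an example for wherever you save the manifests above):
kubectl apply -f prefect-worker-rbac.yaml
kubectl auth can-i list namespaces --as=system:serviceaccount:prefect3:low-resource-worker
kubectl auth can-i create jobs -n prefect3 --as=system:serviceaccount:prefect3:low-resource-worker
Both checks should now return "yes".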
2) If you cannot grant cluster-wide list on namespaces
- In current Prefect Kubernetes worker behavior, this request is part of startup/metadata gathering. There isn’t a supported config to disable just that call. If granting list on namespaces is not possible, please share your Prefect and prefect-kubernetes versions; in some older versions, the worker only did a single read of a specific namespace (e.g., kube-system). We can check if pinning/upgrade avoids the cluster-wide list for your use case.
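One way to check the versions the worker is actually running (the pod name is a placeholder, and pip is assumed to be available in the worker image):
kubectl -n prefect3 exec <your-worker-pod> -- prefect version
kubectl -n prefect3 exec <your-worker-pod> -- pip show prefect-kubernetes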
Helpful references
- Prefect 3 Kubernetes worker docs: Deploy a worker using Helm
- Prefect Kubernetes integration: prefect-kubernetes integration
- Background (Prefect 2.x context, similar permission needed to read namespaces): Regression with Kubernetes (403 Forbidden) after #7747 (Use cluster uid)
Can you share:
- Your Prefect version and prefect-kubernetes version
- How you deployed the worker (Helm vs. custom manifests)
- The namespace you intend the worker to operate in
With that, I can confirm the exact minimal RBAC for your version or suggest an upgrade path if there’s a change in behavior.

Tristan Moeller
09/26/2025, 8:37 AM

Marvin
09/26/2025, 8:38 AM
- When the worker runs in-cluster under a pod service account (prefect3:low-resource-worker), the Kubernetes Python client will use the pod’s service account token. That service account must have the RBAC needed for what the worker does.
- A KubernetesClusterConfig or KubernetesCredentials block can supply different credentials (e.g., a kubeconfig or token for a user with broader permissions). That can “work” if those creds already have the needed rights, but it doesn’t change or elevate the service account’s RBAC. In-cluster, unless you explicitly point the worker to external creds, you’re using the pod’s service account.
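A quick way to confirm the worker is using in-cluster auth is to check that the service account token is mounted at the standard path inside the pod (the pod name is a placeholder); the listing typically shows ca.crt, namespace, and token:
kubectl -n prefect3 exec <your-worker-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount/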
What you need to do
- Keep your namespaced permissions for jobs/pods/logs/events in the worker’s namespace.
- Add a small ClusterRole to allow `get`/`list` on namespaces and bind it to the worker’s service account. That will resolve the GET /api/v1/namespaces 403 you’re seeing.
Example (replace names/namespaces as needed):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-cluster-role
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-cluster-rb
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: ClusterRole
  name: prefect-worker-cluster-role
  apiGroup: rbac.authorization.k8s.io
If you prefer not to grant cluster-scope list
- You’d need to run the worker with credentials (via your Cluster Config) that already have the required permissions, or upgrade to a version/setting that does not perform that cluster-scope call. At present, the Kubernetes worker performs this cluster metadata call; there isn’t a supported toggle to skip it.
References
- Deploying a worker on Kubernetes: Kubernetes worker via Helm
- Prefect Kubernetes integration: prefect-kubernetes integration
If you can share:
- How the worker is deployed (Helm vs. manifests)
- Your Prefect and prefect-kubernetes versions
…I can tailor the exact RBAC snippet or Helm values for your setup.

Tristan Moeller
09/26/2025, 9:28 AM

Marvin
09/26/2025, 9:29 AM
- If you don’t set a service account on the worker pod, it runs as the namespace’s default ServiceAccount.
- Your error shows the pod is using system:serviceaccount:prefect3:low-resource-worker.
- Regardless of which SA you use (default or custom), that SA must have the RBAC needed by the worker. Cluster Config/Kubeconfig tells the worker how to connect, but it does not grant permissions to the SA your pod runs under.
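To see which service accounts exist in the namespace (the default one is created automatically by Kubernetes):
kubectl -n prefect3 get serviceaccounts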
Why you’re seeing 403
- The Kubernetes worker does a cluster-scoped GET /api/v1/namespaces during startup/metadata checks.
- If the SA the pod is using doesn’t have get,list on namespaces, you’ll get the 403 you posted.
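You can reproduce the same failure from your own machine with standard kubectl impersonation (nothing Prefect-specific); until the ClusterRole below is bound, this should fail with 403 Forbidden:
kubectl get namespaces --as=system:serviceaccount:prefect3:low-resource-worker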
You have two options
1) Keep using your existing SA (or the default SA) and add a small cluster-scope permission:
- Bind that SA to a ClusterRole that allows get,list on namespaces (everything else can stay namespaced: jobs/pods/pods/log/events).
- Minimal YAML to add (adjust names/namespaces as needed):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prefect-worker-namespaces
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prefect-worker-namespaces-rb
subjects:
  - kind: ServiceAccount
    name: low-resource-worker
    namespace: prefect3
roleRef:
  kind: ClusterRole
  name: prefect-worker-namespaces
  apiGroup: rbac.authorization.k8s.io
2) Don’t grant cluster-scope perms and instead run the worker with external credentials that already have them
- If you point KubernetesClusterConfig to a kubeconfig or token with the needed rights, the worker will use that identity for API calls. In your logs, though, the identity is a pod SA, so you’re currently using in-cluster auth.
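If you do go this route, a quick sanity check on which identity and permissions an external kubeconfig resolves to (kubectl auth whoami requires a recent kubectl, roughly v1.27+; the kubeconfig path is a placeholder):
kubectl auth whoami --kubeconfig /path/to/external-kubeconfig
kubectl auth can-i list namespaces --kubeconfig /path/to/external-kubeconfig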
How to see which SA your worker pod is using
kubectl -n prefect3 get pod <your-worker-pod> -o jsonpath='{.spec.serviceAccountName}'
Docs for context
- Deploying a worker on K8s: Kubernetes worker via Helm
- Prefect Kubernetes integration: prefect-kubernetes
- Background (2.x but same idea about namespace permissions): Regression with Kubernetes (403 Forbidden) after #7747
If you tell me:
- How you deployed the worker (Helm vs. manifests)
- Your Prefect and prefect-kubernetes versions
…I can tailor the exact RBAC or Helm values so you don’t over-provision.