Kubernetes supports fine-grained access control, so you can decide who has permission to work with resources in your cluster, and what they can do with them.
RBAC has two parts, which decouple the permissions from who holds them - that lets you model security with a manageable number of objects:
Roles and RoleBindings apply to objects in a specific namespace; ClusterRoles and ClusterRoleBindings have a similar API and secure access to objects across all namespaces.
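Here's a minimal sketch of how the two objects fit together - the object names and the example-sa subject are illustrative, they're not part of the lab specs. The Role describes what can be done; the RoleBinding says who can do it:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # illustrative name
  namespace: default
rules:
  - apiGroups: [""]           # core API group, where Pods live
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # illustrative name
  namespace: default
subjects:
  - kind: ServiceAccount      # the "who" - could also be a User or Group
    name: example-sa          # illustrative subject
    namespace: default
roleRef:
  kind: Role                  # the "what they can do"
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Swapping Role for ClusterRole and RoleBinding for ClusterRoleBinding (and dropping the namespace fields) gives you the cluster-wide equivalent.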
There’s a bug in the default RBAC setup in older versions of Docker Desktop, which means permissions are not applied correctly. If you’re using Kubernetes in Docker Desktop v4.2 or earlier, run this to fix the bug:
# on Docker Desktop for Mac (or WSL2 on Windows):
sudo chmod +x ./scripts/fix-rbac-docker-desktop.sh
./scripts/fix-rbac-docker-desktop.sh
# OR on Docker Desktop for Windows (PowerShell):
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope Process -Force
./scripts/fix-rbac-docker-desktop.ps1
Docker Desktop 4.3.0 fixes the issue, so if you run the command and you see Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "docker-for-desktop-binding" not found - that means your version doesn't have the bug and you're good to go.
Authentication for end-user access is managed outside of Kubernetes, so we'll use RBAC for internal access to the cluster - apps running in Kubernetes.
We'll use a simple web app which connects to the Kubernetes API server to get a list of Pods; it displays them and lets you delete them.
Create a sleep Deployment so we’ll have a Pod to see in the app:
kubectl apply -f labs/rbac/specs/sleep.yaml
The initial spec for the web app doesn't include any RBAC rules, but it does include a dedicated ServiceAccount for the Pod:
📋 Deploy the resources in labs/rbac/specs/kube-explorer.
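If you want a hint - applying the whole folder deploys everything it contains:

kubectl apply -f labs/rbac/specs/kube-explorer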
Browse to the app at localhost:8010 or localhost:30010
You’ll see an error. The app is trying to connect to the Kubernetes REST API to get a list of Pods, but it’s getting a 403 Forbidden error message.
Kubernetes automatically populates an authentication token in the Pod, which the app uses to connect to the API server:
📋 Print all the details about the kube-explorer Pod.
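kubectl describe shows the volumes and mounts along with the rest of the Pod detail - the label selector here is an assumption, so adjust it if the Pod spec uses different labels:

kubectl describe pods -l app=kube-explorer   # assumes the Pods are labelled app=kube-explorer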
You'll see a volume mounted at /var/run/secrets/kubernetes.io/serviceaccount - that's not in the Pod spec, it's a default which Kubernetes adds for you:
kubectl exec deploy/kube-explorer -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
That's the authentication token for the ServiceAccount, so Kubernetes knows the identity of the API user.
So the app is authenticated and it's allowed to use the API, but the account is not authorized to list Pods. Security principals - ServiceAccounts, Groups and Users - start off with no permissions and need to be granted access to resources.
You can check the permissions of a user with the auth can-i command:
kubectl auth can-i get pods -n default --as system:serviceaccount:default:kube-explorer
This command works for Users and ServiceAccounts - the ServiceAccount ID includes the namespace and name.
RBAC rules are applied when a request is made to the API server, so we can fix this app by deploying a Role and RoleBinding:
📋 Deploy the rules in labs/rbac/specs/kube-explorer/rbac-namespace and verify the ServiceAccount now has permission.
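One way to do it - apply the folder, then repeat the earlier can-i check, which should now print yes:

kubectl apply -f labs/rbac/specs/kube-explorer/rbac-namespace

kubectl auth can-i get pods -n default --as system:serviceaccount:default:kube-explorer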
Now the app has the permissions it needs. Refresh the site and you’ll see a Pod list. You can delete the sleep Pod, then go back to the main page and you’ll see a replacement Pod created by the ReplicaSet.
The RoleBinding restricts access to the default namespace, so the same ServiceAccount can't see Pods in the system namespace:
📋 Check if the kube-explorer account can get Pods in the kube-system namespace.
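The same can-i command works here, just pointed at a different namespace:

kubectl auth can-i get pods -n kube-system --as system:serviceaccount:default:kube-explorer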
You can grant access to Pods in each namespace with more Roles and RoleBindings, but if you want permissions to apply across all namespaces you can use a ClusterRole and ClusterRoleBinding:
📋 Deploy the cluster rules in labs/rbac/specs/kube-explorer/rbac-cluster and verify the SA can get Pods in the system namespace, but can't delete them.
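To verify your result - apply the folder, then run two can-i checks; the first should print yes and the second no:

kubectl apply -f labs/rbac/specs/kube-explorer/rbac-cluster

kubectl auth can-i get pods -n kube-system --as system:serviceaccount:default:kube-explorer

kubectl auth can-i delete pods -n kube-system --as system:serviceaccount:default:kube-explorer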
Browse to the app with a namespace in the query string, e.g. http://localhost:8010/?ns=kube-system or http://localhost:30010/?ns=kube-system
The app can see Pods in other namespaces now.
RBAC permissions are finely controlled. The app only has access to Pod resources - if you click the Service Accounts link the app shows the 403 Forbidden error again.
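You can confirm that with another can-i check - none of the rules we've deployed cover the serviceaccounts resource:

kubectl auth can-i list serviceaccounts -n default --as system:serviceaccount:default:kube-explorer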
You need to be familiar with RBAC. You’ll certainly have restricted permissions in production clusters, and if you need new access you’ll get it more quickly if you give the admin a Role and RoleBinding for what you need.
Get some practice by deploying new RBAC rules so the ServiceAccount view in the kube-explorer app works correctly, for objects in the default namespace.
Oh - one more thing :) Mounting the ServiceAccount token in the Pod is the default behaviour, but most apps don't use the Kubernetes API server. It's a potential security issue, so can you amend the sleep Pod spec so it doesn't have a token mounted?
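As a hint - the Pod spec has an automountServiceAccountToken field. This is a sketch of where it goes in the Deployment's Pod template, with the rest of the existing spec unchanged:

spec:
  template:
    spec:
      automountServiceAccountToken: false   # stops Kubernetes mounting the token volume
      # existing containers section stays as it is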
kubectl delete pod,deploy,svc,serviceaccount,role,rolebinding,clusterrole,clusterrolebinding -A -l kubernetes.courselabs.co=rbac