In this lab we will work with Kubernetes security features: user authentication and RBAC authorization, audit logging, network policies, and pod security policies.
Chapter Details |
---|---
Chapter Goal | Understand Kubernetes Security
Chapter Sections |
Kubernetes has two categories of users: service accounts and ordinary users. While a ServiceAccount is a Kubernetes resource, ordinary users are managed externally and have no persistent representation in the cluster (no resource object). Ordinary users can be authenticated using different strategies, such as HTTP basic auth or client certificates. In this lab we will create a user with a client certificate and authorize her in a specific namespace.
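To see the difference for yourself, note that service accounts can be created and listed through the API, while there is no "users" resource at all (the service account name below is just an example):
$ kubectl create serviceaccount build-bot
serviceaccount "build-bot" created
$ kubectl get users
error: the server doesn't have a resource type "users"
$ kubectl delete serviceaccount build-bot
serviceaccount "build-bot" deleted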
As the cluster administrator, create a namespace and authorize the group developers to use it.
Step 1 Create namespace development:
$ kubectl create namespace development
namespace "development" created
Step 2 Create file developer-role.yaml:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer
rules:
- apiGroups: ["", "batch", "autoscaling", "extensions", "apps"]
  resources:
  - "statefulsets"
  - "horizontalpodautoscalers"
  - "jobs"
  - "replicationcontrollers"
  - "services"
  - "deployments"
  - "replicasets"
  - "pods"
  verbs: ["*"]
Create the developer ClusterRole (ClusterRoles are cluster-scoped; we will bind it to the development namespace in the next step):
$ kubectl apply -f developer-role.yaml
clusterrole "developer" created
Step 3 Create the file developer-role-binding.yaml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer-binding
  namespace: development
subjects:
- kind: Group
  name: developers
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: developer
  apiGroup: ""
Authorize/bind group developers with role developer on namespace development:
$ kubectl apply -f developer-role-binding.yaml
rolebinding "developer-binding" created
As the cluster administrator, create a user in the developers group.
Step 1 Retrieve the cluster Certificate Authority (CA) certificate. We can copy it from the master node, or extract it from the ~/.kube/config kubectl configuration file:
$ mkdir -p ~/users/alice && cd ~/users/alice
$ awk '/certificate-authority-data:/{print $2}' ~/.kube/config | base64 -d > ca.crt
Test the validity of the extracted CA certificate:
$ openssl x509 -in ca.crt -text
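Alternatively, on a kubeadm-provisioned master the same CA certificate is already on disk and can simply be copied (default kubeadm path):
$ sudo cp /etc/kubernetes/pki/ca.crt .
$ sudo chown stack ca.crt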
Step 2 Generate an X.509 Certificate Signing Request (CSR) for the user alice:
$ openssl genrsa -out alice.key 2048
$ openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=mirantis/O=developers"
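The API server takes the certificate's Common Name (CN) as the username and each Organization (O) field as a group, so this certificate will identify the user alice in the groups mirantis and developers. You can verify the subject before submitting the request (the output format varies with your OpenSSL version):
$ openssl req -in alice.csr -noout -subject
subject=/CN=alice/O=mirantis/O=developers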
Step 3 Request the CSR to be signed using the Kubernetes certificate API:
$ cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: alice_csr
spec:
  groups:
  - system:authenticated
  request: $(cat alice.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
Output should be:
certificatesigningrequest "alice_csr" created
Step 4 As the cluster administrator approve the CSR:
$ kubectl get csr
NAME        AGE       REQUESTOR          CONDITION
alice_csr   23s       kubernetes-admin   Pending
$ kubectl certificate approve alice_csr
certificatesigningrequest "alice_csr" approved
Step 5 Retrieve the signed certificate for the new user:
$ kubectl get csr alice_csr -o jsonpath='{.status.certificate}' | base64 -d > alice.crt
Step 6 Transfer certificates to the new user. The specifics of doing so will depend on your situation, but in our lab environment we will just create a new login account for the user and copy the certificates to their home directory:
$ sudo useradd -b /home -m -s /bin/bash -c "I work here" alice
$ sudo passwd alice
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
$ sudo mkdir -p ~alice/keys && sudo cp -a ~/users/alice/*.{key,crt} ~alice/keys
$ sudo chmod 400 ~alice/keys/* && sudo chown -R alice:alice ~alice/keys
Step 1 Retrieve the value for k8s-api by running the command below:
stack@master:~$ kubectl cluster-info
Kubernetes master is running at https://172.16.1.132:6443
KubeDNS is running at https://172.16.1.132:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
So in this case the Kubernetes API server is reachable at https://172.16.1.132:6443. Note your cluster's specific API server address for the next step.
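If you prefer not to copy it by hand, the same address can usually be extracted directly from your kubeconfig (assuming a single cluster entry):
$ kubectl config view -o jsonpath='{.clusters[0].cluster.server}{"\n"}'
https://172.16.1.132:6443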
Step 2 Log in as the alice user and configure kubectl, using the k8s-api value obtained in the previous step:
stack@master:~$ ssh alice@localhost
alice@master:~$ kubectl config set-cluster work --server=<k8s-api> --certificate-authority=keys/ca.crt --embed-certs=true
Cluster "work" set.
Step 3 Test cluster access:
alice@master:~$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Step 4 Configure user credentials:
alice@master:~$ kubectl config set-credentials alice --client-certificate=keys/alice.crt --client-key=keys/alice.key
User "alice" set.
alice@master:~$ kubectl config set-context work --cluster=work --user=alice --namespace=development
Context "work" created.
alice@master:~$ kubectl config use-context work
Switched to context "work".
Step 5 Test your capabilities:
alice@master:~$ kubectl auth can-i create pod
yes
alice@master:~$ kubectl auth can-i list deployment
yes
alice@master:~$ kubectl auth can-i get pod --subresource=log
no
alice@master:~$ kubectl auth can-i create pod --subresource=exec
no
Congratulations - you have now added a new user with developer capabilities to your Kubernetes cluster.
Step 6 Log out of the alice account for the upcoming steps:
alice@master:~$ logout
stack@master:~$
You can enable the Kubernetes apiserver to record security-related chronological records for an audit trail. In a production environment these records should be stored in an external repository for retrieval and analysis, but in this lab we will simply stream them to the apiserver's stdout. We will also severely limit the number of audited requests to keep the output manageable in a lab environment.
Step 1 Starting with v1.9.x, Advanced Auditing requires an audit Policy resource with defined rules. The apiserver will not log any audit records without specified rules, even if auditing is enabled. Define a minimal Policy that records requests to the pods/log and pods/status subresources at the Metadata level, in the file audit-policy-pod.yaml:
apiVersion: audit.k8s.io/v1beta1
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
  resources:
  - group: ""
    resources: ["pods/log", "pods/status"]
Copy the policy to /etc/kubernetes directory:
$ sudo cp audit-policy-pod.yaml /etc/kubernetes
Step 2 We must modify the startup parameters of the apiserver to enable auditing. Our lab environment was installed by kubeadm. Kubeadm bootstraps kube-apiserver as a containerized static pod: it places a Pod manifest in /etc/kubernetes/manifests and installs kubelet as a systemd service configured to monitor that manifest directory for file changes, so kubelet itself starts (and restarts) kube-apiserver.
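You can see this arrangement on the master node; your listing may differ slightly, and depending on the kubeadm version the kubelet option is either the --pod-manifest-path flag in its systemd drop-in or the staticPodPath field in /var/lib/kubelet/config.yaml:
$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml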
To change the configuration of the apiserver we must copy the pod manifest to a local directory, add the audit and audit-policy options, and add a volume mount so audit-policy-pod.yaml is accessible inside the apiserver container:
$ sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml .
$ sudo chown stack ./kube-apiserver.yaml
Important
Do not edit /etc/kubernetes/manifests/kube-apiserver.yaml in place; kubelet watches this directory and may restart the apiserver from a partially edited file.
Edit the local copy of kube-apiserver.yaml and change:
spec:
  containers:
  - command:
    - kube-apiserver
+   - --audit-log-path=-
+   - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    ....
    volumeMounts:
+   - mountPath: /etc/kubernetes/audit-policy.yaml
+     name: audit-policy
+     readOnly: true
    - mountPath: /etc/kubernetes/pki
    ....
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
+ - hostPath:
+     path: /etc/kubernetes/audit-policy-pod.yaml
+     type: File
+   name: audit-policy
Step 3 In a new window log in to your master node and watch the kubelet logs:
ssh stack@<publicip>
$ sudo journalctl -f -u kubelet | grep apiserver
In the original window overwrite the apiserver manifest in the kubelet manifest directory:
$ sudo cp kube-apiserver.yaml /etc/kubernetes/manifests
Wait until kubelet restarts the apiserver. Then watch the audit messages from the apiserver:
$ kubectl logs --follow kube-apiserver-master --namespace=kube-system
In your second window exit (^C) from journalctl. Start and stop a pod to see the audit trail:
$ kubectl run --image=k8s.gcr.io/echoserver:1.4 es
.....
$ kubectl delete deployment es
NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to the selected pods. By default pods are non-isolated and accept traffic from any source. Pods become isolated when they are selected by a NetworkPolicy, which can then reject traffic that is not explicitly allowed. In this lab we will isolate an nginx deployment in the development namespace from all traffic except that from specific pods.
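Note that NetworkPolicy is only enforced if the cluster's network plugin supports it. This lab environment uses Calico, and you can confirm its agents are running as the cluster admin (the label shown is the one Calico's standard manifests apply); you should see one calico-node pod per node in the Running state:
$ kubectl get pods -n kube-system -l k8s-app=calico-node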
Step 1 In window one log in as user alice, then launch an nginx deployment and expose it:
ssh alice@<publicip>
$ kubectl run nginx --replicas=2 --image=nginx --port=80 --expose
service "nginx" created
deployment "nginx" created
Step 2 In window two log in as the Kubernetes admin and redefine the developer role so developers can create NetworkPolicy resources, as well as get logs and attach or exec to pods:
ssh stack@<publicip>
$ vi developer-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer
rules:
- apiGroups: ["", "batch", "autoscaling", "extensions", "apps"]
  resources:
  - "statefulsets"
  - "horizontalpodautoscalers"
  - "jobs"
  - "replicationcontrollers"
  - "services"
  - "deployments"
  - "replicasets"
  - "pods"
  - pods/attach
  - pods/log
  - pods/exec
  - pods/proxy
  - pods/portforward
  - networkpolicies
  verbs:
  - "*"
Update the developer role:
$ kubectl apply -f developer-role.yaml
clusterrole "developer" configured
Step 3 In window one make sure you are authorized to view logs and exec to pods:
alice@master:~$ kubectl auth can-i get pod --subresource=log
yes
alice@master:~$ kubectl auth can-i create pod --subresource=exec
yes
Step 4 In window one as alice create a pod to query the nginx deployment:
alice@master:~$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx -T 5 -O -
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Step 5 In window two as kubernetes admin query the nginx deployment in the development namespace:
$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx.development -T 5 -O -
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Step 6 In window one as alice create a NetworkPolicy that denies traffic to all pods:
alice@master:~$ vi policy-default-deny.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: default-deny
  namespace: development
spec:
  podSelector:
    matchLabels: {}
Apply the policy:
alice@master:~$ kubectl apply -f policy-default-deny.yaml
networkpolicy "default-deny" created
Check connectivity again:
alice@master:~$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx -T 5 -O -
If you don't see a command prompt, try pressing enter.
wget: download timed out
pod development/access terminated (Error)
Step 7 In window two as kubernetes admin try connecting to the nginx deployment again:
$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx.development -T 5 -O -
If you don't see a command prompt, try pressing enter.
wget: download timed out
pod default/access terminated (Error)
Step 8 In window one as alice create a new policy to allow access from any pod with label run=access:
alice@master:~$ vi policy-access-nginx.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access
Apply the policy:
alice@master:~$ kubectl apply -f policy-access-nginx.yaml
networkpolicy "access-nginx" created
Try accessing the pod again:
alice@master:~$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx -T 5 -O -
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Delete the default-deny policy:
alice@master:~$ kubectl delete networkpolicy default-deny
networkpolicy "default-deny" deleted
Step 9 In window two as kubernetes admin try connecting to the nginx deployment again:
$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx.development -T 5 -O -
If you don't see a command prompt, try pressing enter.
wget: download timed out
pod default/access terminated (Error)
Step 10 In window one as alice update the policy to allow access from any namespace with the label project=friendly:
alice@master:~$ vi policy-access-nginx.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access
    - namespaceSelector:
        matchLabels:
          project: friendly
Apply the policy:
alice@master:~$ kubectl apply -f policy-access-nginx.yaml
networkpolicy "access-nginx" configured
Step 11 In window two as kubernetes admin update your namespace labels and try connecting to the nginx deployment again:
$ kubectl label ns default project=friendly
namespace "default" labeled
$ kubectl run access --image busybox --restart=Never -it --rm -- wget -q nginx.development -T 5 -O -
.....
<html>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
A security context defines the operating system security constraints (uid, gid, capabilities, SELinux role, etc.) applied to a container in order to limit what the containerized process can do on the node it runs on.
Kubernetes allows setting a security context at both the Pod level and the Container level. Some PodSecurityContext fields can be overridden at the Container SecurityContext level, while others, such as fsGroup, are only applicable at the Pod level.
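As a minimal sketch (the values and pod name are illustrative), a Pod-level runAsUser applies to every container unless a container overrides it, while fsGroup can only be set at the Pod level:
apiVersion: v1
kind: Pod
metadata:
  name: secctx-levels-demo
spec:
  securityContext:       # Pod level: applies to all containers in the pod
    runAsUser: 1000
    fsGroup: 2000        # only available in the Pod-level security context
  containers:
  - name: app
    image: gcr.io/google-containers/busybox
    command: ["sleep", "3600"]
    securityContext:     # Container level: overrides the Pod-level runAsUser
      runAsUser: 3000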
The RBAC rules we defined for the developers group only control what a Kubernetes user is allowed to do through the API. The pods a user launches also have an identity in the form of a ServiceAccount; however, the roles granted to the ServiceAccount only control authorization against the Kubernetes API (the control plane). Any container can still become effectively equivalent to root on its node by running in privileged mode.
Step 1 Log in as user alice and launch a privileged application. Create the file privileged-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  hostNetwork: true
  containers:
  - name: privileged-container
    image: gcr.io/google-containers/busybox
    command: ["sleep"]
    args: ["86400"]
    securityContext:
      privileged: true
Launch the pod:
alice@master:~$ kubectl create -f privileged-pod.yaml
pod "privileged-pod" created
Step 2 Connect to the privileged-container and examine its capabilities:
alice@master:~$ kubectl exec privileged-pod -it -- sh
/ # ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:C8:6A:AD:9B:86
inet addr:172.16.1.214 Bcast:172.16.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:192737 errors:0 dropped:0 overruns:0 frame:0
TX packets:87865 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:236722336 (225.7 MiB) TX bytes:8512687 (8.1 MiB)
/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.16.1.1 0.0.0.0 UG 0 0 0 eth0
172.16.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.104.0 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.166.128 172.16.1.107 255.255.255.192 UG 0 0 0 tunl0
192.168.219.64 172.16.1.72 255.255.255.192 UG 0 0 0 tunl0
/ # exit
alice@master:~$
As you can see, any user who can launch such a container can effectively act as root on the node where the container is running.
To limit a container's access to operating system features (the data plane) we must enforce Pod and container SecurityContext settings. Without PodSecurityPolicy enforcement by the admission controller, the SecurityContext of a container is at the discretion of the user who launches the pod. Let's enable the PodSecurityPolicy admission controller plugin:
Step 1 In window one log in as user stack and monitor the kubelet logs on the Kubernetes master node:
$ sudo journalctl -f -u kubelet | grep apiserv
Step 2 In window two log in as user stack and modify /etc/kubernetes/manifests/kube-apiserver.yaml to enable the PodSecurityPolicy admission controller plugin. Do not edit the file in place:
$ sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml .
$ sudo chown stack ./kube-apiserver.yaml
Keep a backup copy in case you have to restore to default settings:
$ cp kube-apiserver.yaml kube-apiserver.yaml.bak
$ vi kube-apiserver.yaml
Make the following modification:
$ sudo diff kube-apiserver.yaml.bak kube-apiserver.yaml
23c23
< - --enable-admission-plugins=NodeRestriction
---
> - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
Overwrite the original file:
$ sudo cp kube-apiserver.yaml /etc/kubernetes/manifests/
Observe the log entries in window one:
Jun 23 04:51:32 master kubelet[11195]: W0623 04:51:32.840286 11195 status_manager.go:459] Failed to get status for pod "kube-apiserver-master_kube-system(b0b95db9702dd4f1a8a2f3f69253ea0c)": an error on the server ("Apisever is shutting down.") has prevented the request from succeeding (get pods kube-apiserver-master)
Jun 23 04:51:32 master kubelet[11195]: W0623 04:51:32.841545 11195 kubelet.go:1591] Deleting mirror pod "kube-apiserver-master_kube-system(a763a50f-7688-11e8-ba5c-027f34f9bbdc)" because it is outdated
Jun 23 04:51:32 master kubelet[11195]: E0623 04:51:32.841808 11195 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-master_kube-system": Delete https://172.16.1.72:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-master: dial tcp 172.16.1.72:6443: getsockopt: connection refused
Jun 23 04:51:32 master kubelet[11195]: E0623 04:51:32.842059 11195 kubelet.go:1606] Failed creating a mirror pod for "kube-apiserver-master_kube-system(b0b95db9702dd4f1a8a2f3f69253ea0c)": Post https://172.16.1.72:6443/api/v1/namespaces/kube-system/pods: dial tcp 172.16.1.72:6443: getsockopt: connection refused
Jun 23 04:51:32 master kubelet[11195]: E0623 04:51:32.858138 11195 reflector.go:322] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: Get https://172.16.1.72:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&resourceVersion=540&timeoutSeconds=392&watch=true: dial tcp 172.16.1.72:6443: getsockopt: connection refused
Jun 23 04:51:32 master kubelet[11195]: I0623 04:51:32.997855 11195 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b0b95db9702dd4f1a8a2f3f69253ea0c-k8s-certs") pod "kube-apiserver-master" (UID: "b0b95db9702dd4f1a8a2f3f69253ea0c")
Jun 23 04:51:32 master kubelet[11195]: I0623 04:51:32.997874 11195 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b0b95db9702dd4f1a8a2f3f69253ea0c-ca-certs") pod "kube-apiserver-master" (UID: "b0b95db9702dd4f1a8a2f3f69253ea0c")
Jun 23 04:51:33 master kubelet[11195]: W0623 04:51:33.572222 11195 kubelet.go:1591] Deleting mirror pod "kube-apiserver-master_kube-system(a763a50f-7688-11e8-ba5c-027f34f9bbdc)" because it is outdated
Jun 23 04:51:38 master kubelet[11195]: W0623 04:51:38.451964 11195 status_manager.go:474] Failed to update status for pod "kube-apiserver-master_kube-system(a763a50f-7688-11e8-ba5c-027f34f9bbdc)": Operation cannot be fulfilled on pods "kube-apiserver-master": the object has been modified; please apply your changes to the latest version and try again
The errors we observe occur because kubelet is trying to create a mirror pod entry for the kube-apiserver pod it just launched, but the PodSecurityPolicy admission controller in that same apiserver rejects the entry, preventing it from being persisted in etcd.
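Despite these errors the apiserver container itself keeps running, because kubelet manages it directly from the manifest file; only its mirror Pod object in the API is affected. You can confirm this from the container runtime on the master (Docker in this lab environment):
$ sudo docker ps | grep kube-apiserver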
Step 3 We can fix this by authorizing kubelet to use a PodSecurityPolicy that allows privileged pods. First, create a policy with sufficient permissions:
$ kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
EOF
podsecuritypolicy "privileged" created
Check the policy:
$ kubectl get psp
NAME         PRIV      CAPS      SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
privileged   true      [*]       RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            [*]
Step 4 When a PodSecurityPolicy resource is created, it does nothing by itself. In order for it to be used, the requesting user or the target pod's service account must be authorized to use the policy, by being granted the use verb on it.
We can check whether kubelet is allowed to use the policy by impersonating the user and group it authenticates as and querying the authorization API with a SubjectAccessReview.
Query the authorization API as user system:node:master and group system:nodes:
$ kubectl create -o yaml -f - <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  resourceAttributes:
    group: extensions
    resource: podsecuritypolicies
    name: privileged
    verb: use
  user: system:node:master
  groups:
  - system:nodes
EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
metadata:
  creationTimestamp: null
spec:
  groups:
  - system:nodes
  resourceAttributes:
    group: extensions
    name: privileged
    resource: podsecuritypolicies
    verb: use
  user: system:node:master
status:
  allowed: false
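The same check can also be performed with kubectl auth can-i and impersonation flags, which the cluster administrator is allowed to use (the fully qualified resource name pins the check to the extensions API group used above):
$ kubectl auth can-i use podsecuritypolicies.extensions/privileged --as=system:node:master --as-group=system:nodes
no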
Step 5 To fix this, modify the ClusterRole system:node to grant access to the privileged PodSecurityPolicy; we will then bind that role to the group system:nodes so every kubelet is permitted to do what the policy authorizes. Add the highlighted section below under rules:
$ kubectl edit clusterrole system:node
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-06-25T00:23:36Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:node
  resourceVersion: "17121"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Anode
  uid: 0007abc4-780e-11e8-aee0-021d19e61c3c
rules:
- apiGroups:
  - extensions
  resourceNames:
  - privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
...
Step 6 The system:node ClusterRole is updated. Create a ClusterRoleBinding for kubelet's group:
$ kubectl create clusterrolebinding node-psp-binding --clusterrole=system:node --group=system:nodes
clusterrolebinding "node-psp-binding" created
Check the authorization API for any node in the cluster. The status should now be allowed: true:
$ kubectl create -o yaml -f - <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  resourceAttributes:
    group: extensions
    resource: podsecuritypolicies
    name: privileged
    verb: use
  user: system:node:node1
  groups:
  - system:nodes
EOF
Congratulations - you have now enabled Pod Security Policy in your Kubernetes cluster and authorized kubelet to use the privileged policy.
In newer versions of Kubernetes the Calico plugin must also be authorized so that it can set up network connectivity for newly created pods. Calico runs as the ServiceAccount calico-node. We can edit the ClusterRole this ServiceAccount is bound to and make the needed change:
$ kubectl edit -n kube-system clusterrole calico-node
....
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
....
rules:
- apiGroups:
  - extensions
  resourceNames:
  - privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - update
- apiGroups:
  - ""
....
Once PodSecurityPolicy admission control is enabled, pods that do not match an authorized policy cannot be created. For existing clusters it is therefore recommended to add and authorize policies before enabling the PodSecurityPolicy admission controller.
Step 1 As user alice, create a pod that runs as a non-root user in the file secctx-demo1.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
Try to create the pod:
alice@master:~$ kubectl create -f secctx-demo1.yaml
Error from server (Forbidden): error when creating "secctx-demo1.yaml": pods "security-context-demo" is forbidden: unable to validate against any pod security policy: []
Step 2 As the Kubernetes admin, create an appropriate PodSecurityPolicy for developers in the file restricted-psp.yaml:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
  - ALL
  # Allow core volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  # Assume that persistentVolumes set up by the cluster admin are safe to use.
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false
Create the policy:
$ kubectl apply -f restricted-psp.yaml
podsecuritypolicy "restricted" created
Step 3 We might be tempted to bind the restricted PodSecurityPolicy to user alice or to group developers to complete our solution, but this would not be ideal, because most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet, or other templated controller, via the controller manager.
Granting the controller manager access to the policy would not work either, because it would grant access to all pods created by that controller. The preferred method for authorizing policies is to grant access to the service account the pod runs as.
Modify file developer-role.yaml as follows and redefine the developer ClusterRole:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer
rules:
- apiGroups: ["", "batch", "autoscaling", "extensions", "apps"]
  resources:
  - "serviceaccounts"
  - "statefulsets"
  - "horizontalpodautoscalers"
  - "jobs"
  - "cronjobs"
  - "replicationcontrollers"
  - "services"
  - "deployments"
  - "replicasets"
  - "pods"
  - pods/attach
  - pods/log
  - pods/exec
  - pods/proxy
  - pods/portforward
  - networkpolicies
  verbs:
  - "*"
- apiGroups: ["extensions"]
  resources:
  - podsecuritypolicies
  resourceNames:
  - restricted
  verbs:
  - use
Apply the file so the developer role has access to the restricted PodSecurityPolicy:
$ kubectl apply -f developer-role.yaml
clusterrole "developer" configured
Step 4 As user alice try to launch the non-root pod again:
alice@master:~$ kubectl create -f secctx-demo1.yaml
pod "security-context-demo" created
You have now authorized users in group developers to launch pods in namespace development. This is because of the developer-binding rolebinding we created in section 8.1.1, step 3:
$ kubectl get rolebinding -n development -o yaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: 2018-06-25T00:36:06Z
    name: developer-binding
    namespace: development
    resourceVersion: "1508"
    selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/development/rolebindings/developer-binding
    uid: bf361a00-780f-11e8-aee0-021d19e61c3c
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: developer
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: developers
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Step 5 However, recall that pods launched by controllers are authorized using the controller's and the pod's serviceaccount identities, not the end user's. The above rolebinding does not grant any privileges to serviceaccounts in namespace development, so controllers will fail to launch pods in the namespace.
To demonstrate, as user alice create the file secctx-demo-rc.yaml as follows:
apiVersion: v1
kind: ReplicationController
metadata:
  name: security-context-demo
spec:
  replicas: 2
  selector:
    app: security-context-demo
  template:
    metadata:
      name: security-context-demo
      labels:
        app: security-context-demo
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      volumes:
      - name: sec-ctx-vol
        emptyDir: {}
      containers:
      - name: sec-ctx-demo
        image: gcr.io/google-samples/node-hello:1.0
        volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
        securityContext:
          allowPrivilegeEscalation: false
Create the ReplicationController:
alice@master:~$ kubectl create -f secctx-demo-rc.yaml
replicationcontroller "security-context-demo" created
Check its status:
alice@master:~$ kubectl get rc security-context-demo -o jsonpath='{.status..message}{"\n"}'
pods "security-context-demo-" is forbidden: unable to validate against any pod security policy: []
As you can see, we can create the ReplicationController, but the ReplicationController cannot create the pods.
Step 6 Modify the file developer-role-binding.yaml to give access to any serviceaccount in namespace development:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer-binding
  namespace: development
subjects:
- kind: Group
  name: developers
  apiGroup: ""
- kind: Group
  name: system:serviceaccounts:development
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: developer
  apiGroup: ""
Apply the RoleBinding:
$ kubectl apply -f developer-role-binding.yaml
rolebinding "developer-binding" configured
As user alice apply the ReplicationController again:
alice@master:~$ kubectl apply -f secctx-demo-rc.yaml
replicationcontroller "security-context-demo" configured
Congratulations - you have now authorized users in group developers and serviceaccounts in namespace development to launch pods in the namespace.
As cluster admins we may now be satisfied with our security policies at the cluster level, but application developers have to make sure their applications meet the stricter criteria in order to run. This may not be trivial.
Step 1 As user alice try to create a very basic pod:
alice@master:~$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: echoserver
spec:
  containers:
  - name: echoserver
    image: gcr.io/google_containers/echoserver:1.4
    ports:
    - containerPort: 8080
EOF
pod "echoserver" created
Check the result:
alice@master:~$ kubectl get pod echoserver -o jsonpath='{.status.containerStatuses[:].state}{"\n"}'
map[waiting:map[reason:CreateContainerConfigError message:container has runAsNonRoot and image will run as root]]
The echoserver application is based on nginx. As developers we must now run nginx as a non-root user, which requires intimate knowledge of nginx and the Linux operating system.
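One possible starting point (illustrative only, not a complete fix) is to state explicitly which non-root uid the container should run as; an nginx-based image may still need writable paths (pid file, cache directories) and a non-privileged listen port before it actually starts cleanly under the restricted policy:
apiVersion: v1
kind: Pod
metadata:
  name: echoserver-nonroot
spec:
  securityContext:
    runAsUser: 1000                     # satisfy MustRunAsNonRoot with an explicit non-root uid
  containers:
  - name: echoserver
    image: gcr.io/google_containers/echoserver:1.4
    ports:
    - containerPort: 8080
    securityContext:
      allowPrivilegeEscalation: false   # the restricted policy also forbids privilege escalation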