5. Kubernetes Installation

In this section, we will learn how to install a multi-node Kubernetes cluster using simple scripts from the Mirantis repository.

The scripts leverage existing tools from the Open Source community and are presented in the order of the tasks you would perform if you were setting up the cluster yourself, without any scripts.

Chapter Details
Chapter Goal: Install and tear down a Kubernetes cluster

5.1. Discover your K8s minion nodes

So far, you have been working on a single machine which will become your Kubernetes master node.

In this chapter, you will learn how to set up a multi-node Kubernetes cluster.

Step 1 Log in to your Kubernetes master node using stack as the user name and b00tcamp as the password:

user@laptop:~$ ssh stack@<master-IP>

Step 2 Take a look at the /etc/hosts file to view your node information. At the bottom of the file you should see entries like the following for your nodes. Note that your IP addresses will be different from the ones listed here:

stack@master:~$ cat /etc/hosts
...
172.16.1.120  node1
172.16.1.108  node2
172.16.1.197  master

Your minion nodes are reachable from your master node over the virtual private network set up by the lab's cloud provider. Each node is also assigned a hostname that the master node can use to address it.
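
If you would like to confirm how a hostname is resolved (an optional check, not part of the lab scripts), getent queries the same resolver that ping uses; the address shown below assumes the example /etc/hosts entries above:

stack@master:~$ getent hosts node1
172.16.1.120    node1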

Step 3 Ping each of your nodes to ensure that they are reachable:

stack@master:~$ ping -c 4 node1
...
--- node1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms

stack@master:~$ ping -c 4 node2
...
--- node2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 0.323/0.375/0.416/0.036 ms

From this point on, your minions will be referred to as node1 and node2.

Lastly, for your convenience we provide a shell variable called $PrivateIP that contains your lab's private IP address. This is, in effect, the address the minions will use to connect to the master node.

Step 4 Check that the variable returns the private IP address:

stack@master:~$ echo $PrivateIP
172.16.1.33

Note that your IP address may be different from the one listed above.
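
The $PrivateIP variable is pre-set for you by the lab environment. Purely as a hypothetical sketch of how such a variable could be defined, the assignment below takes the first address reported by hostname -I; this assumes the private address is listed first, which may not hold in every environment:

stack@master:~$ PrivateIP=$(hostname -I | awk '{print $1}')   # assumption: first reported address is the private one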

5.2. Install Docker on Nodes

First, you will need to install Docker on all the nodes. Your master node already has Docker installed from a previous exercise.

Step 1 Check the Docker installation on your master node:

stack@master:~$ docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64
 Experimental: false

Step 2 Install Docker on node1 using the script:

stack@master:~$ ssh -t stack@node1 "sudo -s && bash" < ~/k8s-examples/install/install-docker.sh

Step 3 Install Docker on node2 using the script:

stack@master:~$ ssh -t stack@node2 "sudo -s && bash" < ~/k8s-examples/install/install-docker.sh

These commands open a remote sudo session on node1 and node2 and feed them the contents of ~/k8s-examples/install/install-docker.sh over SSH, so the script is executed with root privileges on each remote node.
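
If you prefer a single command, the two installs above can be wrapped in a loop; this is only a convenience sketch, equivalent to Steps 2 and 3:

stack@master:~$ for node in node1 node2; do
>   ssh -t stack@$node "sudo -s && bash" < ~/k8s-examples/install/install-docker.sh
> done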

5.3. Install Cluster Components

Next, we will install kubeadm on all nodes. kubeadm is an easy-to-use cluster bootstrapping tool that configures each node to install and run the Kubernetes components. One of the nodes is then initialized as the Kubernetes master, and the remaining nodes join it as workers.

Step 1 Install kubeadm on the master node:

stack@master:~$ sudo ~/k8s-examples/install/install-kadm.sh

Step 2 Install kubeadm on node1:

stack@master:~$ ssh -t stack@node1 "sudo -s && bash" < ~/k8s-examples/install/install-kadm.sh

Step 3 Install kubeadm on node2:

stack@master:~$ ssh -t stack@node2 "sudo -s && bash" < ~/k8s-examples/install/install-kadm.sh
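
As an optional check, you can confirm that kubeadm is installed on every node by printing its version (the exact output depends on the packaged release):

stack@master:~$ kubeadm version -o short
stack@master:~$ ssh stack@node1 "kubeadm version -o short"
stack@master:~$ ssh stack@node2 "kubeadm version -o short"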

5.4. Install Kubectl on Master

In Kubernetes, kubectl is the command-line client that users interact with to issue commands to the cluster. It must be installed on the master node.

Step 1 Install kubectl on the master node:

stack@master:~$ sudo ~/k8s-examples/install/install-kctl.sh
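
Since the cluster does not exist yet, you can optionally verify the installation by asking kubectl for its client version only:

stack@master:~$ kubectl version --client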

5.5. Initialize Kubernetes Cluster

The kubeadm tool we installed earlier must now be used to initialize the cluster, starting with the master node. Run the following commands on the master.

Step 1 First, we must generate a token to use for initializing the nodes:

stack@master:~$ token=$(kubeadm token generate)

Take a look at the token:

stack@master:~$ echo $token
0c182c.dd75ec67d8ab5ed7

Since we will use this token in the next few steps, we saved it in the token variable so the value can be referenced easily.
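
kubeadm tokens follow the documented format [a-z0-9]{6}.[a-z0-9]{16}. As an optional sanity check, you can match the saved value against that pattern:

stack@master:~$ echo $token | grep -E '^[a-z0-9]{6}\.[a-z0-9]{16}$' || echo "token looks malformed"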

Step 2 Initialize the master node using kubeadm init. The $token and $PrivateIP variables set earlier are expanded automatically, so the token you generated is the one that is used:

stack@master:~$ sudo kubeadm init --token $token \
                              --kubernetes-version 1.11.1 \
                              --apiserver-advertise-address $PrivateIP \
                              --pod-network-cidr 192.168.0.0/16

# Notice the following output at the end
kubeadm join --token 0c182c.dd75ec67d8ab5ed7 172.16.1.33:6443
--discovery-token-ca-cert-hash sha256:78a870e2b459db84d14a4287ac514bd679069cc35d1f44ae75c50391f63b7552

Another way to view the token after kubeadm init is the following:

stack@master:~$ sudo kubeadm token list
TOKEN                     TTL       EXPIRES                USAGES                  ...
ef9247.c020d3d2d2828292   23h       2018-03-31T14:59:46Z   authentication,signing  ...

Step 3 Take note of the output of the kubeadm init command and save the value of discovery-token-ca-cert-hash (the sha256:... string), which we will use later to securely join the worker nodes. Replace <hash-value> with your hash:

stack@master:~$ token_hash=<hash-value>
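
If you no longer have the kubeadm init output at hand, the same hash can be recomputed from the cluster CA certificate. The command below follows the procedure documented for kubeadm join and prepends the sha256: prefix expected by the flag:

stack@master:~$ token_hash=sha256:$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //')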

Step 4 Copy the admin configuration file to the home directory:

stack@master:~$ mkdir -p $HOME/.kube
stack@master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
stack@master:~$ sudo chown stack:stack ~/.kube/config
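
With the admin kubeconfig in place, kubectl can now reach the API server. A quick optional check:

stack@master:~$ kubectl cluster-info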

Step 5 We must set up a networking plugin for Kubernetes. We will use the Calico CNI plugin for this purpose. The manifests below are compatible with Kubernetes v1.11.1:

stack@master:~$ kubectl apply -f \
  https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created

stack@master:~$ kubectl apply -f \
  https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
daemonset.extensions/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
serviceaccount/calico-node created
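
Before joining the workers, you can optionally watch the control-plane and Calico pods come up (press Ctrl-C to exit the watch):

stack@master:~$ kubectl get pods --namespace kube-system --watch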

Step 6 Join node1 to the cluster by running the kubeadm join command shown in the output above; the saved $token, $PrivateIP, and $token_hash variables supply its arguments:

stack@master:~$ ssh stack@node1 \
"sudo kubeadm join --token $token $PrivateIP:6443 --discovery-token-ca-cert-hash $token_hash"

Step 7 Run the same command on node2:

stack@master:~$ ssh stack@node2 \
"sudo kubeadm join --token $token $PrivateIP:6443 --discovery-token-ca-cert-hash $token_hash"

Step 8 Using kubectl, install the Kubernetes core Metrics Server. Metrics Server is a trimmed-down implementation, available since Kubernetes v1.8, that exposes core Kubernetes metrics via the Metrics API:

stack@master:~$ kubectl apply -f $HOME/k8s-examples/extensions/metrics-server-1.8+
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
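
The Metrics API only becomes usable once the metrics-server pod is running and its APIService is registered. You can optionally confirm both before running kubectl top; it may take a minute or two:

stack@master:~$ kubectl get deployment metrics-server --namespace kube-system
stack@master:~$ kubectl get apiservice v1beta1.metrics.k8s.io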

Step 9 Using kubectl, check the status of all nodes. It may take around 5 minutes for all nodes to reach the Ready STATUS. Wait until all nodes are Ready and then exit the command using Ctrl-C:

stack@master:~$ kubectl get nodes --watch
NAME      STATUS    ROLES     AGE       VERSION
....
node1     Ready     <none>    2m        v1.11.1
node2     Ready     <none>    2m        v1.11.1
master    Ready     master    13m       v1.11.1

Step 10 Using kubectl top, check CPU and memory usage in your cluster:

stack@master:~$ kubectl top node
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
master    109m         5%        1673Mi          43%
node1     36m          1%        769Mi           19%
node2     32m          1%        804Mi           20%

Step 11 Using kubectl top, check CPU and memory usage of your Kubernetes control plane:

stack@master:~$ kubectl top pod --namespace kube-system
NAME                              CPU(cores)   MEMORY(bytes)
calico-node-l5zsk                 14m          47Mi
calico-node-q2l27                 14m          45Mi
calico-node-vl9v7                 13m          56Mi
coredns-78fcdf6894-tpbtj          2m           7Mi
coredns-78fcdf6894-v86mb          2m           7Mi
etcd-master                       11m          59Mi
kube-apiserver-master             20m          437Mi
kube-controller-manager-master    19m          63Mi
kube-proxy-9kw65                  2m           12Mi
kube-proxy-nfxjc                  2m           12Mi
kube-proxy-qsgkf                  2m           13Mi
kube-scheduler-master             7m           13Mi
metrics-server-5c4945fb9f-mhnvq   1m           14Mi

Congratulations! You have successfully set up a three-node Kubernetes cluster.