In this section, we will learn how to install a multi-node Kubernetes cluster using simple scripts from the Mirantis repository.
The scripts leverage existing tools from the open source community and are presented in the order of the tasks you would perform if you were setting up your own cluster without any scripts.
Chapter Details |
---|---
Chapter Goal | Install and tear down a Kubernetes cluster
Chapter Sections |
In this chapter, you will learn how to set up a multi-node Kubernetes cluster.
Step 1 Log in to your Kubernetes master node using stack as the user name and b00tcamp as your password:
user@laptop:~$ ssh stack@<master-IP>
Step 2 Take a look at the /etc/hosts file to view your node information. At the bottom of the file you should see the following information about your nodes. Note that your IP addresses will be different from the ones listed here:
stack@master:~$ cat /etc/hosts
...
172.16.1.120 node1
172.16.1.108 node2
172.16.1.197 master
Your minion nodes are reachable from your master node through the virtual private network set up by the lab's cloud provider. Furthermore, each is assigned a hostname that the master node can use to reference it.
Step 3 Ping each of your nodes to ensure that they are reachable:
stack@master:~$ ping -c 4 node1
...
--- node1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
stack@master:~$ ping -c 4 node2
...
--- node2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 0.323/0.375/0.416/0.036 ms
From this point on, your minions will be referred to as node1 and node2.
Lastly, for your convenience we provide a variable called $PrivateIP, which holds your lab’s private IP address. This is, in effect, the IP address that the minions will use to connect to the master node.
Step 4 Check that you can get the PrivateIP address using the variable:
stack@master:~$ echo $PrivateIP
172.16.1.33
Note that your IP address may be different from the one listed above.
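As an aside, if you are ever working in an environment that does not predefine this variable, one common way to approximate it is to take the first address reported by hostname -I. This is only a sketch, and it assumes the private interface’s address is listed first, which may not hold on every network:
stack@master:~$ PrivateIP=$(hostname -I | awk '{print $1}')
stack@master:~$ echo $PrivateIP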
First, you will need to install Docker on all your nodes. To simplify the installation process, we have provided a set of convenience scripts for the installation.
Step 1 Take a look at the scripts in the install directory:
stack@master:~$ ls ~/k8s-examples/install/
Run the following script to install Docker on the master node; the script will also print the version of Docker it installed:
stack@master:~$ sudo ~/k8s-examples/install/install-docker.sh
Step 2 Install Docker on node1 using the script:
stack@master:~$ ssh -t node1 "sudo -s && bash" < ~/k8s-examples/install/install-docker.sh
Step 3 Install Docker on node2 using the script:
stack@master:~$ ssh -t node2 "sudo -s && bash" < ~/k8s-examples/install/install-docker.sh
The command we ran opens a remote sudo session on node1 and node2, takes the contents of ~/k8s-examples/install/install-docker.sh, and executes them on each remote node.
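To confirm the installation succeeded on both minions, you can query the Docker version over the same kind of remote session; the exact version string printed will depend on what the script installed:
stack@master:~$ ssh -t node1 "sudo docker --version"
stack@master:~$ ssh -t node2 "sudo docker --version"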
Next, we will install kubeadm on all nodes. kubeadm is an easy-to-use cluster setup tool that configures nodes to install and run Kubernetes in a cluster configuration. One of the nodes is then chosen to be the Kubernetes master node.
Step 1 Install kubeadm on the master node:
stack@master:~$ sudo ~/k8s-examples/install/install-kadm.sh
Step 2 Install kubeadm on node1:
stack@master:~$ ssh -t node1 "sudo -s && bash" < ~/k8s-examples/install/install-kadm.sh
Step 3 Install kubeadm on node2:
stack@master:~$ ssh -t node2 "sudo -s && bash" < ~/k8s-examples/install/install-kadm.sh
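As a quick sanity check (not part of the original lab steps), you can confirm that all three nodes report the same kubeadm version; the -o short flag prints just the version string:
stack@master:~$ kubeadm version -o short
stack@master:~$ ssh -t node1 "kubeadm version -o short"
stack@master:~$ ssh -t node2 "kubeadm version -o short"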
In Kubernetes, kubectl is the client that users interact with to issue commands to the cluster. It will be installed on the master node.
Step 1 Install bash completion for kubectl on the master node:
stack@master:~$ sudo ~/k8s-examples/install/install-kctl.sh
Notes
You will need to log out of your lab and log in again for the bash completion to take effect.
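Alternatively, you can load the completion into your current shell immediately by sourcing the completion script that kubectl itself generates:
stack@master:~$ source <(kubectl completion bash)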
The kubeadm tool we installed earlier must now be used to initialize the cluster, beginning with the master node. Run the following commands to initialize the master.
Step 1 First, we must generate a token to use for initializing the nodes:
stack@master:~$ token=$(kubeadm token generate)
Take a look at the token:
stack@master:~$ echo $token
0c182c.dd75ec67d8ab5ed7
Since we will use this token in the next few steps, we saved it in the token variable so its value is easy to reference.
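kubeadm tokens follow the fixed format [a-z0-9]{6}.[a-z0-9]{16}. If you want to sanity-check the variable before proceeding, a simple pattern match works:
stack@master:~$ [[ $token =~ ^[a-z0-9]{6}\.[a-z0-9]{16}$ ]] && echo "token format OK"
token format OK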
Step 2 Initialize the master node using kubeadm init. The $token variable supplies the token you just generated, and the --pod-network-cidr value of 192.168.0.0/16 matches the default IP pool of the Calico network plugin we will install in Step 5:
stack@master:~$ sudo kubeadm init --token $token \
--kubernetes-version 1.17.4 \
--apiserver-advertise-address $PrivateIP \
--pod-network-cidr 192.168.0.0/16
# Notice the following output at the end
kubeadm join --token 0c182c.dd75ec67d8ab5ed7 172.16.1.33:6443
--discovery-token-ca-cert-hash sha256:78a870e2b459db84d14a4287ac514bd679069cc35d1f44ae75c50391f63b7552
Step 3 Take note of the output of this command and save the value of discovery-token-ca-cert-hash, which we will use to securely join our worker nodes later. Replace <hash-value> with the sha256:... string from your own output:
stack@master:~$ token_hash=<hash-value>
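If you ever lose the hash from the kubeadm init output, it can be recomputed from the cluster’s CA certificate. This is the standard recipe from the Kubernetes documentation; the sed at the end replaces openssl’s (stdin)= prefix with sha256: so the value can be assigned directly to token_hash:
stack@master:~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'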
Another way to view the token after kubeadm init is the following:
stack@master:~$ sudo kubeadm token list
TOKEN TTL EXPIRES USAGES ...
ef9247.c020d3d2d2828292 23h 2020-03-31T14:59:46Z authentication,signing ...
Step 4 Copy the admin configuration file to the home directory:
stack@master:~$ mkdir -p $HOME/.kube
stack@master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
stack@master:~$ sudo chown stack:stack ~/.kube/config
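With the admin configuration in place, verify that kubectl can reach the API server; the reported master endpoint should match $PrivateIP on port 6443:
stack@master:~$ kubectl cluster-info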
Step 5 We must set up a networking plugin for Kubernetes; we will use the Calico CNI plugin for our purposes. The manifest below is compatible with the Kubernetes version installed in this lab (v1.17.4):
stack@master:~$ kubectl apply -f $HOME/k8s-examples/addons/calico/kube-calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
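Before joining the workers in the next step, you can check that the Calico pods are coming up; the stock Calico manifest labels them k8s-app=calico-node (adjust the label selector if your copy of the manifest differs):
stack@master:~$ kubectl get pods --namespace kube-system -l k8s-app=calico-node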
Step 6 Join node1 to the cluster using kubeadm join; the $token and $token_hash variables supply the values from the kubeadm init output above:
stack@master:~$ ssh node1 \
"sudo kubeadm join --token $token $PrivateIP:6443 --discovery-token-ca-cert-hash $token_hash"
Step 7 Run the same command on node2:
stack@master:~$ ssh node2 \
"sudo kubeadm join --token $token $PrivateIP:6443 --discovery-token-ca-cert-hash $token_hash"
Step 8 Using kubectl, install the Kubernetes core Metrics Server. Metrics Server is a trimmed-down implementation, available since Kubernetes v1.8, that exposes core Kubernetes metrics via the Metrics API:
stack@master:~$ kubectl apply -f $HOME/k8s-examples/extensions/metrics-server
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
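The v1beta1.metrics.k8s.io API service registered above is what kubectl top queries in the next steps; you can also hit the Metrics API directly to see the raw node metrics as JSON:
stack@master:~$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes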
Step 9 Using kubectl, check the status of all nodes. It may take around two minutes for every node's STATUS to change to Ready. Wait until all nodes are Ready, then exit the command with Ctrl-C:
stack@master:~$ kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
....
NAME STATUS ROLES AGE VERSION
master Ready master 7m22s v1.17.4
node1 Ready <none> 2m55s v1.17.4
node2 Ready <none> 2m42s v1.17.4
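For more detail on each node, including its internal IP, OS image, and container runtime, kubectl supports a wide output format:
stack@master:~$ kubectl get nodes -o wide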
Step 10 Using kubectl top, check CPU and memory usage in your cluster. It may take a couple of minutes for the Metrics Server to ingest metrics data and provide the expected output:
stack@master:~$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 109m 5% 1673Mi 43%
node1 36m 1% 769Mi 19%
node2 32m 1% 804Mi 20%
Step 11 Using kubectl top, check the CPU and memory usage of your Kubernetes control plane:
stack@master:~$ kubectl top pod --namespace kube-system
NAME CPU(cores) MEMORY(bytes)
calico-node-l5zsk 14m 47Mi
calico-node-q2l27 14m 45Mi
calico-node-vl9v7 13m 56Mi
coredns-78fcdf6894-tpbtj 2m 7Mi
coredns-78fcdf6894-v86mb 2m 7Mi
etcd-master 11m 59Mi
kube-apiserver-master 20m 437Mi
kube-controller-manager-master 19m 63Mi
kube-proxy-9kw65 2m 12Mi
kube-proxy-nfxjc 2m 12Mi
kube-proxy-qsgkf 2m 13Mi
kube-scheduler-master 7m 13Mi
metrics-server-5c4945fb9f-mhnvq 1m 14Mi
Congratulations! You have successfully set up a three-node Kubernetes cluster.