In this lab, we will install a single-node Kubernetes cluster and explore its basic features.
Chapter Details |
---|---
Chapter Goal | Install and use a single-node Kubernetes cluster
Chapter Sections |
Minikube (https://github.com/kubernetes/minikube) is a tool that runs a single-node Kubernetes cluster in a virtual machine. It can be used on GNU/Linux or OS X and requires VirtualBox, KVM (for Linux), xhyve (OS X), or VMware Fusion (OS X) to be installed on your computer. Minikube creates a new virtual machine with GNU/Linux, installs and configures Docker and Kubernetes, and runs a Kubernetes cluster. You can use Minikube on your laptop to explore Kubernetes features.
In this lab, we will install a single-node Kubernetes cluster locally using kubeadm. To simplify the installation, we will install Docker from the public Mirantis repository. The installation script will also install the matching kubectl command line interface (CLI).
Step 1 Install kubeadm:
$ sudo ~/k8s-examples/install/install-kadm.sh
Step 2 Install the kubectl command line tool:
$ sudo ~/k8s-examples/install/install-k8s.sh
Step 3 The local Kubernetes cluster is up and running. Check that kubectl can connect to the Kubernetes cluster:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", ...
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", ...
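You can also verify that the single node has registered and that the control plane is reachable (optional checks):
$ kubectl get nodes
$ kubectl cluster-info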
Step 4 Enable bash completion for the kubectl client:
$ sudo ~/k8s-examples/install/install-kctl.sh
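If you only need completion in the current shell session, you can also enable it manually (a manual alternative, assuming the bash-completion package is installed):
$ source <(kubectl completion bash)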
The kubectl tool can connect to multiple Kubernetes clusters. A set of connection parameters, for example, the address of the Kubernetes API server and the credentials to use, is called a context. You can define several contexts and specify which context to use to connect to a specific cluster. You can also specify a default context.
Step 1 Use kubectl config view to view the current kubectl configuration:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.16.1.23:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
As you can see, the current-context is kubernetes-admin@kubernetes, which is the only context currently defined. In this case, kubectl uses the server address of the kubernetes cluster entry as the address of the Kubernetes API server. The kubectl configuration is stored in the ~/.kube/config file, which was set up for you by our installation script.
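For example, to list the available contexts and select one explicitly (with a single cluster this is a no-op, but it is useful when several contexts are defined):
$ kubectl config get-contexts
$ kubectl config use-context kubernetes-admin@kubernetes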
We are going to begin declaring Kubernetes resources by writing YAML files containing resource definitions. To make your life easier, we have added a vim profile that turns tabs into two spaces.
Step 1 Check the /home/stack/.vimrc file and ensure it includes the following content:
set expandtab
set tabstop=2
Step 2 Define a new pod in the file echoserver-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: echoserver
spec:
  containers:
  - name: echoserver
    image: gcr.io/google_containers/echoserver:1.4
    ports:
    - containerPort: 8080
We use the existing echoserver image. This is a simple server that responds with the HTTP headers it receives. It runs on an nginx server and is implemented in Lua in the nginx configuration: https://github.com/kubernetes/contrib/tree/master/ingress/echoheaders
Step 3 Create the echoserver pod:
$ kubectl create -f echoserver-pod.yaml
pod/echoserver created
Step 4 Use kubectl get pods to watch the pod get created:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver 1/1 Running 0 5s
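If the pod does not reach the Running status, the following commands (optional) are useful for troubleshooting:
$ kubectl describe pod echoserver
$ kubectl logs echoserver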
Step 5 Now let’s get the pod definition back from Kubernetes:
$ kubectl get pods echoserver -o yaml > echoserver-pod-created.yaml
Compare echoserver-pod.yaml and echoserver-pod-created.yaml to see additional properties that have been added to the original pod definition.
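For example, a quick way to compare the two files:
$ diff echoserver-pod.yaml echoserver-pod-created.yaml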
Step 1 Define a new replication controller that manages 2 replicas of the echoserver pod. Create a new file echoserver-rc.yaml with the following content:
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    app: echoserver
  template:
    metadata:
      name: echoserver
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
Step 2 Create a new replication controller:
$ kubectl create -f echoserver-rc.yaml
replicationcontroller/echoserver created
Step 3 Use kubectl get replicationcontrollers to list replication controllers:
$ kubectl get replicationcontrollers
NAME DESIRED CURRENT READY AGE
echoserver 2 2 2 15s
Step 4 Use kubectl get pods to list pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver 1/1 Running 0 37m
echoserver-obzuw 1/1 Running 0 46s
echoserver-rl8kx 1/1 Running 0 46s
Step 5 Our replication controller created two new pods (replicas). The existing pod echoserver does not have the label app: echoserver, so it is not controlled by our replication controller. Let's add this label to the echoserver pod:
$ kubectl label pods echoserver app=echoserver
pod/echoserver labeled
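You can verify which pods now carry the label (an optional check):
$ kubectl get pods --show-labels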
Step 6 List pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver 1/1 Running 0 1h
echoserver-rl8kx 1/1 Running 0 1h
Step 7 Our replication controller has detected that there are now three pods labeled with app: echoserver, so one pod has been stopped by the controller. Use kubectl describe to see the controller events:
$ kubectl describe replicationcontroller/echoserver
Name: echoserver
Namespace: default
Image(s): gcr.io/google_containers/echoserver:1.4
Selector: app=echoserver
Labels: app=echoserver
Replicas: 2 current / 2 desired
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
... Reason Message
... -------- -------
... SuccessfulDelete Deleted pod: echoserver-obzuw
Step 8 To scale the number of replicas up, we need to update the replicas field. Edit the file echoserver-rc.yaml and change the number of replicas to 3. Then use kubectl replace to update the replication controller:
$ kubectl replace -f echoserver-rc.yaml
replicationcontroller/echoserver replaced
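Alternatively, you can scale the controller directly from the command line; note that this does not update echoserver-rc.yaml:
$ kubectl scale replicationcontroller echoserver --replicas=3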
Step 9 Use kubectl describe to check that the number of replicas has been updated in the controller:
$ kubectl describe replicationcontroller/echoserver
...
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Step 10 Let’s check the number of pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver 1/1 Running 0 2h
echoserver-b5ujn 1/1 Running 0 2m
echoserver-rl8kx 1/1 Running 0 1h
You can see that the replication controller has started a new pod.
We have three running echoserver pods, but we cannot use them from our lab machine because the container ports are not accessible. Let's define a new service that exposes the echoserver ports and makes them accessible from the lab.
Step 1 Create a new file echoserver-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: "NodePort"
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: echoserver
Step 2 Create a new service:
$ kubectl create -f echoserver-service.yaml
service/echoserver created
Step 3 Check the service details:
$ kubectl describe services/echoserver
Name: echoserver
Namespace: default
Labels: <none>
Selector: app=echoserver
Type: NodePort
IP: ...
Port: <unset> 8080/TCP
NodePort: <unset> 31698/TCP
Endpoints: ...:8080,...:8080,..:8080
Session Affinity: None
No events.
Note that the output contains three endpoints and a node port. The service can be reached at IP:Port (its cluster IP) or at LabIP:NodePort (the lab node's IP).
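To list the endpoints in a compact form (an optional check):
$ kubectl get endpoints echoserver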
Step 4 To access the service via its cluster IP, use port 8080 and the IP shown in the output above:
$ curl http://<IP>:8080
CLIENT VALUES:
client_address=...
command=GET
real path=/
...
Step 5 A NodePort can be used to access a service from the Internet if the node is reachable and the port is open. Check whether you can access the echoserver service remotely. On the lab node, get the public IP of the node:
$ publicip=$(curl -s 169.254.169.254/2016-09-02/meta-data/public-ipv4)
$ echo $publicip
On your client machine execute:
$ curl http://<publicip>:<NodePort>
CLIENT VALUES:
client_address=...
command=GET
real path=/
...
Step 1 Before diving into Kubernetes deployments, let's delete our service, replication controller, and pods. To delete the service, execute the following command:
$ kubectl delete service echoserver
service "echoserver" deleted
Step 2 To delete the replication controller and its pods:
$ kubectl delete replicationcontroller echoserver
replicationcontroller "echoserver" deleted
Step 3 Check that there are no running pods:
$ kubectl get pods
Note that if you want to delete just a replication controller without deleting any of its pods, use the option --cascade=false.
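For example (do not run this now, since the controller has already been deleted):
$ kubectl delete replicationcontroller echoserver --cascade=false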
Step 1 The simplest way to create a new deployment for a single-container pod is to use kubectl create deployment:
$ kubectl create deployment echoserver \
--image=gcr.io/google_containers/echoserver:1.4
deployment.apps/echoserver created
Step 2 Check pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver-722388366-5z1bh 1/1 Running 0 1m
Step 3 To access the echoserver from the lab, create a new service using kubectl expose deployment:
$ kubectl expose deployment echoserver --type=NodePort --port=8080
service/echoserver exposed
To get the exposed port number execute:
$ kubectl describe services/echoserver | grep ^NodePort
NodePort: <unset> 30512/TCP
Remember the port number for the next step.
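If you prefer to capture the NodePort in a shell variable instead of remembering it (a small convenience; the jsonpath expression assumes the service defines a single port):
$ nodeport=$(kubectl get service echoserver -o jsonpath='{.spec.ports[0].nodePort}')
$ echo $nodeport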
Step 4 Check that the echoserver is accessible:
$ curl http://localhost:<NodePort>
CLIENT VALUES:
...
Step 5 Let’s change the number of replicas in the deployment. Use kubectl edit to open an editor and change the number of replicas to 3:
$ kubectl edit deployment echoserver
# edit the deployment definition, change replicas to 3
deployment.extensions/echoserver edited
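Instead of editing the deployment interactively, you can also scale it from the command line:
$ kubectl scale deployment echoserver --replicas=3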
Step 6 View the deployment details:
$ kubectl describe deployment echoserver
Name: echoserver
Namespace: default
Labels: app=echoserver
Selector: app=echoserver
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: <none>
NewReplicaSet: echoserver-722388366 (3/3 replicas created)
...
Step 7 Check that there are 3 running pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echoserver-722388366-5p5ja 1/1 Running 0 1m
echoserver-722388366-5z1bh 1/1 Running 0 1h
echoserver-722388366-nhfb1 1/1 Running 0 1h
Step 8 Use kubectl rollout history deployment to see revisions of the deployment:
$ kubectl rollout history deployment echoserver
deployments "echoserver":
REVISION CHANGE-CAUSE
1 <none>
Step 9 Now we want to replace our echoserver with a new implementation. We want to use a new image based on alpine. Edit the deployment:
$ kubectl edit deployment echoserver
Step 10 Change the image value to:
image: alpine:3.6
Step 11 And add a new command field just after the image:
image: alpine:3.6
command: ['nc', '-p', '8080', '-lke', 'echo', '-ne', 'HTTP/1.0 200 OK\nContent-Length: 13\n\nHello World!\n']
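While the pods are being replaced, you can follow the progress of the rolling update (optional):
$ kubectl rollout status deployment echoserver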
Step 12 Check the deployment status:
$ kubectl describe deployment echoserver
...
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
...
Step 13 Check that the echoserver works (use the port number from step 3):
$ curl http://localhost:<NodePort>
Hello World!
Step 14 The deployment controller replaced all of the pods with new ones, one by one. Let's check the revisions:
$ kubectl rollout history deployment echoserver
deployments "echoserver":
REVISION CHANGE-CAUSE
1 <none>
2 <none>
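To inspect the pod template recorded for a particular revision (optional):
$ kubectl rollout history deployment echoserver --revision=2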
Step 15 Now suppose we decide that the new implementation does not work as expected (we wanted an echoserver, not a hello world application). Let's undo the last change:
$ kubectl rollout undo deployment echoserver
deployment.extensions/echoserver rolled back
Step 16 Check the deployment status:
$ kubectl describe deployment echoserver
...
Replicas: 3 updated | 3 total | 3 available | 0 unavailable
...
Step 17 Check that the echoserver works (use the port number from step 3):
$ curl http://localhost:<NodePort>
CLIENT VALUES:
...
Step 18 Delete the deployment:
$ kubectl delete deployment echoserver
deployment.extensions "echoserver" deleted
We have successfully rolled back the deployment and our pods are based on the echoserver image again.
Step 1 Define a new job in the file myjob.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  completions: 5
  parallelism: 1
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: myjob
        image: busybox
        command: ["sleep", "20"]
      restartPolicy: Never
This job is based on the busybox image. It waits 20 seconds, then exits. We requested 5 successful completions with no parallelism.
Step 2 Create a new job:
$ kubectl create -f myjob.yaml
job.batch/myjob created
Step 3 Let’s watch the job being executed and the results of each execution:
$ kubectl get jobs --watch
NAME DESIRED SUCCESSFUL AGE
myjob 5 1 1m
...
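You can also watch the pods created by the job; the job controller labels them with job-name=myjob:
$ kubectl get pods -l job-name=myjob --watch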
Step 4 If we’re interested in more details about the job:
$ kubectl describe jobs myjob
...
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
...
Step 5 And finally, after all five completions have finished:
$ kubectl describe jobs myjob
...
Pods Statuses: 0 Running / 5 Succeeded / 0 Failed
...
Step 6 After that, the job can be deleted:
$ kubectl delete job myjob
job.batch "myjob" deleted
Checkpoint