6. Kubernetes Building Blocks

In this lab, we will learn the basic building blocks of Kubernetes.

Note

YAML files for this lab are located in the directory ~/k8s-examples/overview/.

Chapter Details

Chapter Goal: Learn the basic building blocks of Kubernetes

6.1. Kubernetes Client

Step 1 Log in to the lab and check the Kubernetes client (kubectl) and server versions:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", ...
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", ...

Step 2 The kubectl tool allows you to connect to multiple Kubernetes clusters. A set of connection parameters, for example, the address of the Kubernetes API server plus the credentials, is called a context. You can define several contexts and specify which context to use to connect to a specific cluster; you can also set a default context. Use kubectl config view to view the current kubectl configuration:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://<private-ip>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
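
For example, kubectl config get-contexts lists the available contexts, and kubectl config use-context sets the default one (shown here with the context name from the output above):

$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".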

Step 3 Use kubectl cluster-info to see basic information about the current cluster:

$ kubectl cluster-info
Kubernetes master is running at ...
KubeDNS is running at ...

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

6.2. Explore the Cluster

The IP address you received from your Instructor is the public IP address of the master node. Your cluster also contains worker nodes.

Step 1 Use kubectl get nodes to get a list of nodes in your cluster:

$ kubectl get nodes -o wide
NAME      STATUS    ROLES     AGE       VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
master    Ready     master    7h        v1.11.1   172.16.1.43    <none>        Ubuntu 16.04.2 LTS   4.4.0-1022-aws   docker://17.3.2
node1     Ready     <none>    7h        v1.11.1   172.16.1.201   <none>        Ubuntu 16.04.2 LTS   4.4.0-1022-aws   docker://17.3.2
node2     Ready     <none>    7h        v1.11.1   172.16.1.165   <none>        Ubuntu 16.04.2 LTS   4.4.0-1022-aws   docker://17.3.2

Step 2 You need to fill out the following table with the IP addresses of the master and worker nodes.

Public IP of the Master Node     lab-ip
Private IP of the Master Node    private-ip
Private IP of Node1              node1-ip
Private IP of Node2              node2-ip

Use the IP address you received from your Instructor as lab-ip. To get the private IP address (private-ip) of each node, you can use the output of kubectl describe:

$ kubectl describe node | grep InternalIP -A 1
  InternalIP:  172.16.x.xxx
  Hostname:    master
--
  InternalIP:  172.16.x.xxx
  Hostname:    node1
--
  InternalIP:  172.16.x.xxx
  Hostname:    node2

Alternatively, you can use ip addr show eth0 on each node to get the same address:

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:0c:b9:2f:3c:af brd ff:ff:ff:ff:ff:ff
    inet <private-ip>/24 brd 172.16.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::40c:b9ff:fe2f:3caf/64 scope link
       valid_lft forever preferred_lft forever

6.3. Create a Pod

We are going to declare Kubernetes building blocks by writing YAML files containing their definitions.

Step 1 Define a new pod in the file echoserver-pod-1.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: echoserver
spec:
  containers:
  - name: echoserver
    image: k8s.gcr.io/echoserver:1.4
    ports:
    - containerPort: 8080

We use the Docker image echoserver, hosted in the Docker registry gcr.io. This is a simple web server that responds with the HTTP headers it receives. It runs on an nginx server and is implemented in Lua inside the nginx configuration: https://github.com/kubernetes/contrib/tree/master/ingress/echoheaders

Step 2 Create the echoserver pod:

$ kubectl create -f echoserver-pod-1.yaml
pod "echoserver" created

Step 3 Use kubectl get pods to watch the pod get created:

$ kubectl get pods --watch
NAME         READY     STATUS              RESTARTS   AGE
echoserver   0/1       ContainerCreating   0          5s
echoserver   1/1       Running   0         6s

Use Ctrl-C to exit the above command.

Step 4 Get the pod definition back from Kubernetes:

$ kubectl get pod echoserver -o yaml | less

As you can see, the actual created object contains more properties than you defined in the original file. Note that the properties fall into different sections:

  • metadata such as namespace: default
  • declarative specification of the object (spec) such as restartPolicy: Always, dnsPolicy: ClusterFirst, serviceAccount: default
  • object’s actual state (status), such as phase: Running or podIP: 192.168.1.x

Step 5 Now let’s execute a command in the application container to make sure our echoserver application works:

$ kubectl exec -it echoserver /bin/bash
root@echoserver:/#

We started a new interactive shell session in the container: the -i option passes stdin to the container, and the -t option indicates that stdin is a TTY.

Let’s see if the application responds on port 8080:

root@echoserver:/# curl http://127.0.0.1:8080
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://127.0.0.1:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=127.0.0.1:8080
user-agent=curl/7.47.0
BODY:
...

We have verified that the application responds on port 8080. Now you can close the interactive session using exit or by pressing Ctrl-D:

root@echoserver:/# exit
exit
stack@master:~$

6.4. Attach a Label

A label is just a key=value pair that is attached to a Kubernetes building block such as a Pod. We can attach a label to a building block at creation time, or add, remove, or modify a label at any later time.

Step 1 Define a new pod in the file echoserver-pod-2.yaml (also available in ~/k8s-examples/overview/). The new pod is similar to our previous pod, but it has a new name, echoserver2, and adds a new labels key with { mylabel1: value1 } as its value.

The name and labels definitions are both attributes of the object’s metadata:

apiVersion: v1
kind: Pod
metadata:
  name: echoserver2     # Change 'echoserver' to 'echoserver2'
  labels:               # Add a new 'labels' key
    mylabel1: value1    # Add a new 'mylabel1' label
spec:
  containers:
  - name: echoserver
    image: k8s.gcr.io/echoserver:1.4
    ports:
    - containerPort: 8080

Step 2 Create a new Pod:

$ kubectl create -f echoserver-pod-2.yaml
pod "echoserver2" created

Step 3 Check that the label is set for the newly created pod:

$ kubectl get pods echoserver2 --show-labels
NAME          READY     STATUS    RESTARTS   AGE       LABELS
echoserver2   1/1       Running   0          25s       mylabel1=value1

You can also use a JSONPath (http://goessner.net/articles/JsonPath/) expression to get just the pod’s labels:

$ kubectl get pod echoserver2 -o jsonpath='{.metadata.labels}{"\n"}'
map[mylabel1:value1]

Step 4 Attach one more label mylabel2 to the same pod:

$ kubectl label pod echoserver2 mylabel2=value2
pod "echoserver2" labeled

Check that our pod has two labels:

$ kubectl get pod echoserver2 -o jsonpath='{.metadata.labels}{"\n"}'
map[mylabel2:value2 mylabel1:value1]
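
Because labels are queryable, you can select building blocks by label with the -l (--selector) command line option. For example, to list only the pods carrying mylabel1=value1 (the AGE will differ in your case):

$ kubectl get pods -l mylabel1=value1
NAME          READY     STATUS    RESTARTS   AGE
echoserver2   1/1       Running   0          2m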

Step 5 To change the value of an existing label, use the --overwrite command line option:

$ kubectl label pod echoserver2 mylabel1=value2 --overwrite
pod "echoserver2" labeled

Check our pod’s labels:

$ kubectl get pod echoserver2 -o jsonpath='{.metadata.labels}{"\n"}'
map[mylabel2:value2 mylabel1:value2]

Step 6 Delete an existing label:

$ kubectl label pod echoserver2 mylabel2-
pod "echoserver2" labeled

Check our pod’s labels:

$ kubectl get pod echoserver2 --show-labels
NAME          READY     STATUS    RESTARTS   AGE       LABELS
echoserver2   1/1       Running   0          23m       mylabel1=value2

Step 7 We no longer need our echoserver2 pod, so let’s delete it:

$ kubectl delete pod echoserver2
pod "echoserver2" deleted

6.5. Attach an Annotation

An annotation, just like a label, is a key=value pair attached to a Kubernetes building block, such as a Pod. However, unlike labels, annotations are not queryable: you cannot use them to identify and select building blocks.

Step 1 Attach a new annotation to the existing pod echoserver:

$ kubectl annotate pod echoserver description="A simple echoserver application"
pod "echoserver" annotated.

Check that our pod has the newly attached annotation:

$ kubectl get pod echoserver -o jsonpath="{.metadata.annotations.description}{'\n'}"
A simple echoserver application

Step 2 Change the existing annotation:

$ kubectl annotate pod echoserver --overwrite description="A simple echoserver application v1.4"
pod "echoserver" annotated.

Check that the annotation has been updated:

$ kubectl get pod echoserver -o jsonpath="{.metadata.annotations.description}{'\n'}"
A simple echoserver application v1.4

Step 3 Delete the existing annotation:

$ kubectl annotate pod echoserver description-
pod "echoserver" annotated

Check that the annotation has been deleted:

$ kubectl get pod echoserver -o jsonpath="{.metadata.annotations}{'\n'}"
map[cni.projectcalico.org/podIP:192.168.1.2/32]

6.6. Create a Replication Controller

Step 1 Define a new replication controller for 2 replicas of an echoserver pod. Create the file echoserver-rc.yaml with the following content:

apiVersion: v1
kind: ReplicationController
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    app: echoserver
  template:
    metadata:
      name: echoserver
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080

Note that we set the label app: echoserver in the pod template and use the same label in the replication controller’s selector.

Step 2 Create a new replication controller:

$ kubectl create -f echoserver-rc.yaml
replicationcontroller "echoserver" created

Step 3 Use kubectl get replicationcontrollers to list replication controllers:

$ kubectl get rc -o wide
NAME         DESIRED   CURRENT   READY     AGE       CONTAINERS   IMAGES                      SELECTOR
echoserver   2         2         2         1m        echoserver   k8s.gcr.io/echoserver:1.4   app=echoserver

Notes

We used the shorthand name ‘rc’ instead of ‘replicationcontrollers’. kubectl commands accept these abbreviated resource names. For more information see: https://kubernetes.io/docs/reference/kubectl/overview/#resource-types
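
You can also list all resource types together with their short names using kubectl api-resources (available in kubectl 1.11 and later; column spacing trimmed here):

$ kubectl api-resources | grep replicationcontrollers
replicationcontrollers   rc   true   ReplicationController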

Step 4 Use kubectl get pods to list pods:

$ kubectl get pods -o wide --show-labels
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE      LABELS
echoserver         1/1       Running   0          1h        192.168.1.2   node1     <none>
echoserver-jltx6   1/1       Running   0          5m        192.168.2.4   node2     app=echoserver
echoserver-kvdxc   1/1       Running   0          5m        192.168.1.3   node1     app=echoserver

Step 5 Our replication controller created two new pods (replicas). The existing pod echoserver does not have the label app: echoserver, therefore it is not controlled by our replication controller. Let’s add this label to the echoserver pod:

$ kubectl label pods echoserver app=echoserver
pod "echoserver" labeled

Step 6 List pods:

$ kubectl get pods -o wide --show-labels
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE      LABELS
echoserver         1/1       Running   0          1h        192.168.1.2   node1     app=echoserver
echoserver-kvdxc   1/1       Running   0          6m        192.168.1.3   node1     app=echoserver

Step 7 Our replication controller detected that there were three pods labeled with app: echoserver, so it stopped one of them. Use kubectl describe to see the controller events:

$ kubectl describe rc/echoserver
Name:         echoserver
Namespace:    default
Selector:     app=echoserver
Labels:       app=echoserver
Annotations:  <none>
Replicas:     2 current / 2 desired
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=echoserver
  Containers:
   echoserver:
    Image:        k8s.gcr.io/echoserver:1.4
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  18m   replication-controller  Created pod: echoserver-jltx6
  Normal  SuccessfulCreate  18m   replication-controller  Created pod: echoserver-kvdxc
  Normal  SuccessfulDelete  12m   replication-controller  Deleted pod: echoserver-jltx6

Notes

We used rc/echoserver to refer to our replication controller. kubectl allows a / as a separator in lieu of a <space>.

Step 8 To scale up the number of replicas, we need to update the replicas field. Edit the file echoserver-rc.yaml and change the number of replicas to 3:

$ vim echoserver-rc.yaml
...
spec:
  replicas: 3
...

Step 9 Then use kubectl replace to update the replication controller:

$ kubectl replace -f echoserver-rc.yaml
replicationcontroller "echoserver" replaced

Step 10 Use kubectl describe to check that the number of replicas has been updated in the controller:

$ kubectl describe replicationcontroller echo
...
Replicas:   3 current / 3 desired
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
...

Notes

kubectl describe allows an object name prefix, such as echo, to be used in place of the object’s full name. For example, kubectl describe pod echo | grep -w Name returns the names of all 3 echoserver pods.

Step 11 Let’s check the number of pods:

$ kubectl get pods -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP                NODE
echoserver         1/1       Running   0          25m       192.168.104.1     node2
echoserver-pmkk5   1/1       Running   0          3m        192.168.104.2     node2
echoserver-s5dv2   1/1       Running   0          5s        192.168.166.131   node1

You can see that the replication controller has started a new pod.

6.7. Create a Service

We have three running echoserver pods, each accessible at its own IP address, but pods are ephemeral, so these IP addresses are not stable. Let’s define a new service that exposes the echoserver application and provides a stable IP address for all of its pods.

Step 1 Create a new file echoserver-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: "NodePort"
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: echoserver

Step 2 Create a new service:

$ kubectl create -f echoserver-service.yaml
service "echoserver" created

Step 3 Check the service details:

$ kubectl describe services/echoserver
Name:                     echoserver
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=echoserver
Type:                     NodePort
IP:                       10.103.4.100
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32339/TCP
Endpoints:                192.168.1.2:8080,192.168.1.3:8080,192.168.2.5:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Note that the output contains three Endpoints, an IP address, and a NodePort. The IP and NodePort values in the output above can be different in your case.

Step 4 Our application can be accessed through the Service IP address and Service Port:

$ curl 10.103.4.100:80
CLIENT VALUES:
client_address=...
command=GET
real path=/
...

The service IP address acts as a stable virtual IP address (VIP) and load balancer for the application.
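
The endpoints behind this VIP are the pods selected by the service; you can list them directly (the addresses match the pod IPs from kubectl get pods -o wide):

$ kubectl get endpoints echoserver
NAME         ENDPOINTS                                            AGE
echoserver   192.168.1.2:8080,192.168.1.3:8080,192.168.2.5:8080   5m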

Step 5 To access a service exposed via a node port, specify the node port from the previous step:

$ curl http://localhost:<nodeport>
CLIENT VALUES:
client_address=...
command=GET
real path=/
...

NodePort simply forwards a specific port on all nodes to the service IP address and port.
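
For example, the same node port answers on every node’s private IP (use the addresses you recorded in Section 6.2; <nodeport> is the port from the previous steps):

$ curl http://<node1-ip>:<nodeport>
$ curl http://<node2-ip>:<nodeport>

Both commands return the same echoserver response.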

6.8. Delete a Service, Controller, Pod

Step 1 Before diving into Kubernetes deployments, let’s delete our service, replication controller, and pods. First, check the resources in our namespace:

$ kubectl get all
NAME                   READY     STATUS    RESTARTS   AGE
pod/echoserver-76zh4   1/1       Running   0          16s
pod/echoserver-h5xz7   1/1       Running   0          16s
pod/echoserver-wml6l   1/1       Running   0          16s

NAME                               DESIRED   CURRENT   READY     AGE
replicationcontroller/echoserver   3         3         3         16s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/echoserver   NodePort    10.110.130.102   <none>        80:30128/TCP   6s
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        11h

Step 2 To delete the service, execute the following command:

$ kubectl delete service echoserver
service "echoserver" deleted

Step 3 To delete the replication controller and its pods:

$ kubectl delete replicationcontroller echoserver
replicationcontroller "echoserver" deleted

Step 4 Check that there are no running pods:

$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   11h

Notes

The option --cascade=false can be used to delete just a replication controller without deleting any of its pods.
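
For example, the following would have removed the controller while leaving its pods running (shown for illustration only; we already deleted the controller together with its pods):

$ kubectl delete replicationcontroller echoserver --cascade=false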

6.9. Create a Deployment

Deployments and ReplicaSets are the next-generation replacement for ReplicationControllers. A Deployment adds rolling-update semantics for applications; a ReplicaSet adds set-based selectors.
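
As an illustration of a set-based selector (not used in this lab’s files), a ReplicaSet or Deployment selector can use matchExpressions in addition to matchLabels:

selector:
  matchExpressions:
  - key: app          # select pods whose 'app' label
    operator: In      # has any of the listed values
    values:
    - echoserver
    - echoserver2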

Step 1 Create a new file echoserver-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080

Step 2 Use the file echoserver-deployment.yaml to create a new deployment and check that a new deployment and pods have been created:

$ kubectl create -f echoserver-deployment.yaml
deployment "echoserver" created

$ kubectl get all
NAME                             READY     STATUS    RESTARTS   AGE
pod/echoserver-bdd6c7cfc-lgdws   1/1       Running   0          22s
pod/echoserver-bdd6c7cfc-mtzpx   1/1       Running   0          22s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   11h

NAME                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/echoserver   2         2         2            2           22s

NAME                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/echoserver-bdd6c7cfc   2         2         2         22s

Note that the pod names contain two IDs (they may differ in your case), and that the replica set’s name contains the same first ID as the pod names in the output above.

Step 3 Use the same service definition in the echoserver-service.yaml file to create a service for the deployment:

$ kubectl create -f echoserver-service.yaml
service "echoserver" created

To get the exposed port number, execute:

$ kubectl get service echoserver
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
echoserver   10.106.84.243   <nodes>       80:30625/TCP   1m

You can also use JSONPath to get the exposed port number:

$ nodeport=$(kubectl get service echoserver -o jsonpath='{.spec.ports[0].nodePort}')

The port number may differ in your case; remember it for the next step.

Step 4 Check that the echoserver is accessible:

$ curl http://localhost:$nodeport
CLIENT VALUES:
...

Step 5 Let’s change the number of replicas in the deployment. Use kubectl edit to open an editor and change the number of replicas to 3:

$ kubectl edit deployment echoserver
# edit the deployment definition, change replicas to 3
deployment "echoserver" edited

Step 6 View the deployment details:

$ kubectl describe deployment echoserver
Name:                   echoserver
Namespace:              default
Labels:                 <none>
Selector:               app=echoserver
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
OldReplicaSets:                 <none>
NewReplicaSet:                  echoserver-bdd6c7cfc (3/3 replicas created)
...

Step 7 Check that there are 3 running pods:

$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
echoserver-bdd6c7cfc-cpt6w   1/1       Running   0          55s
echoserver-bdd6c7cfc-lgdws   1/1       Running   0          27m
echoserver-bdd6c7cfc-mtzpx   1/1       Running   0          27m

Step 8 Use kubectl rollout history deployment to see revisions of the deployment:

$ kubectl rollout history deployment echoserver
deployments "echoserver":
REVISION    CHANGE-CAUSE
1           <none>

Step 9 Now we want to replace our echoserver with another implementation. Edit the deployment:

$ kubectl edit deployment echoserver

Step 10 Find the image key and change its value to alpine:3.6:

image: alpine:3.6

Then add a new command key just after the image:

image: alpine:3.6
command: ['nc', '-p', '8080', '-lke', 'echo', '-ne', 'HTTP/1.0 200 OK\nContent-Length: 13\n\nHello World!\n']

Save the file and exit the editor.

Step 11 Check the deployment status:

$ kubectl describe deployment echoserver
...
Replicas:           3 updated | 3 total | 3 available | 0 unavailable
...

Step 12 Check that the echoserver works (use the port number from Step 3):

$ curl http://localhost:$nodeport
Hello World!

Step 13 The deployment controller replaced all of the pods with new ones, one by one. Let’s check the revisions:

$ kubectl rollout history deployment echoserver
deployments "echoserver":
REVISION    CHANGE-CAUSE
1           <none>
2           <none>

Step 14 Now suppose we decide that the new implementation does not work as expected (we wanted an echoserver, not a hello-world application). Let’s undo the last change:

$ kubectl rollout undo deployment echoserver
deployment "echoserver" rolled back

Notes

If you edited the deployment more than once, you can roll back to a specific revision by specifying the --to-revision parameter.
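
For example, to roll back to revision 1:

$ kubectl rollout undo deployment echoserver --to-revision=1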

Step 15 Check the deployment status:

$ kubectl describe deployment echoserver
...
Replicas:           3 updated | 3 total | 3 available | 0 unavailable
...
OldReplicaSets:  <none>
NewReplicaSet:   echoserver-bdd6c7cfc (3/3 replicas created)
...

Step 16 Check that the echoserver works (use the port number from Step 3):

$ curl http://localhost:$nodeport
CLIENT VALUES:
...

Step 17 kubectl rollout undo actually creates a new revision, so doing another undo will undo the previous undo:

$ kubectl rollout history deployment echoserver
deployments "echoserver"
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

$ kubectl rollout undo deployment echoserver
deployment.apps "echoserver"

$ kubectl rollout history deployment echoserver
deployments "echoserver"
REVISION  CHANGE-CAUSE
3         <none>
4         <none>

$ curl http://localhost:$nodeport
Hello World!

Step 18 Delete the service and deployment:

$ kubectl delete service,deployment echoserver
service "echoserver" deleted
deployment "echoserver" deleted

6.10. Shortcuts for a Deployment and Service

Step 1 A simpler way to create a new deployment for a single-container pod is to use kubectl run:

$ kubectl run echoserver \
--image=k8s.gcr.io/echoserver:1.5 \
--port=8080 \
--replicas=2
deployment "echoserver" created

Check that a new deployment and pods have been created:

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
echoserver   2         2         2            2           9s

$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
echoserver-722388366-5z1bh   1/1       Running   0          1m
echoserver-722388366-nhfb1   1/1       Running   0          1m

Step 2 To make echoserver accessible from the lab, create a new service using kubectl expose deployment:

$ kubectl expose deployment echoserver --port=80 --target-port=8080 --type=NodePort
service "echoserver" exposed

Do you remember how to view the host port exposed by the service? Find this port and test the echoserver on your own before proceeding.
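
If you get stuck, the JSONPath query from Section 6.9 works here as well:

$ kubectl get service echoserver -o jsonpath='{.spec.ports[0].nodePort}'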

Step 3 Delete the service and deployment:

$ kubectl delete service,deployment echoserver
service "echoserver" deleted
deployment "echoserver" deleted

Step 4 You can create a new deployment for a single-container pod and expose its port at the same time using the following command (note the --expose command line option):

$ kubectl run echoserver \
--image=k8s.gcr.io/echoserver:1.5 \
--port=8080 \
--expose \
--replicas=2
service "echoserver" created
deployment "echoserver" created

Check that a new deployment, pods, and service have been created:

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
echoserver   2         2         2            2           9s

$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
echoserver-722388366-5z1bh   1/1       Running   0          1m
echoserver-722388366-nhfb1   1/1       Running   0          1m

Step 5 Check the service created with the kubectl run command:

$ kubectl get service echoserver
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
echoserver   10.98.34.100   <none>        8080/TCP   2m

As you can see, this is a ClusterIP service. To make it a NodePort service, edit the service and change its type from ClusterIP to NodePort:

$ kubectl edit service echoserver
# Edit the opened file, change `type: ClusterIP` to `type: NodePort`
service "echoserver" edited

Check the service again:

$ kubectl get service echoserver
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
echoserver   10.98.34.100   <nodes>       8080:30094/TCP   6m

Now you can access the echoserver application on node port 30094 (the port may differ in your case).

6.11. Service Discovery

Let’s check how service discovery works.

Step 1 Start a new container based on the busybox image:

$ kubectl run --rm -it busybox --image=k8s.gcr.io/busybox --command /bin/sh
If you don't see a command prompt, try pressing enter.
/ #

Step 2 First, we will use a service discovery technique based on environment variables. Since our service is named echoserver, we can use the ECHOSERVER_SERVICE_HOST and ECHOSERVER_SERVICE_PORT variables:

/ # printenv
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=busybox-75b8b588f4-7gxzd
SHLVL=1
HOME=/root
ECHOSERVER_PORT_8080_TCP_ADDR=10.99.221.94
ECHOSERVER_SERVICE_HOST=10.99.221.94
ECHOSERVER_PORT_8080_TCP_PORT=8080
ECHOSERVER_PORT_8080_TCP_PROTO=tcp
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
ECHOSERVER_PORT=tcp://10.99.221.94:8080
ECHOSERVER_SERVICE_PORT=8080
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
ECHOSERVER_PORT_8080_TCP=tcp://10.99.221.94:8080
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1

/ # wget -qO - http://${ECHOSERVER_SERVICE_HOST}:${ECHOSERVER_SERVICE_PORT}
CLIENT VALUES:
...

Step 3 Next, we will use a service discovery technique based on DNS. Since our service is named echoserver, we can use this name as a host name to access the service:

/ # wget -qO - http://echoserver:8080
CLIENT VALUES:
...

We can also use a fully qualified domain name: <service-name>.<namespace-name>.svc.cluster.local. This allows applications to use services from other namespaces:

/ # wget -qO - http://echoserver.default.svc.cluster.local:8080
CLIENT VALUES:
...

Step 4 Exit from the busybox container: enter exit or press Ctrl-D. The busybox deployment and pod will be deleted automatically (we used the --rm command line option).

Step 5 We will no longer use our service and deployment; let’s delete them:

$ kubectl delete service,deployment echoserver
service "echoserver" deleted
deployment "echoserver" deleted

6.12. Create a Stateful Set

Step 1 Create a new file echoserver-ss.yaml with two declarations:

apiVersion: v1
kind: Service
metadata:
  name: echoserver
  labels:
    app: echoserver
spec:
  ports:
  - port: 80
  clusterIP: None
  selector:
    app: echoserver
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echoserver
spec:
  serviceName: echoserver
  replicas: 3
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080

The first declaration is for the headless (clusterIP: None) service echoserver, which is required to give the pods stable, unique network identifiers. The second declaration is for the stateful set itself. A StatefulSet must reference its governing service in serviceName, and that service must be created before the stateful set.

Step 2 Open a new terminal and log in to the lab again (you can also use tmux or screen to open a new window in the same SSH session). In the new terminal, run the following command to watch pods:

$ kubectl get pods -w -o wide

Keep this command running; it is expected that no output is displayed yet.

Step 3 Return to the first terminal and create a new service and stateful set:

$ kubectl create -f echoserver-ss.yaml
service "echoserver" created
statefulset "echoserver" created

In the second terminal, check that pods are created in a strict order (0,1,2):

$ kubectl get pod -o wide --watch
NAME           READY     STATUS               RESTARTS IP       NODE
...
echoserver-0   0/1       ContainerCreating    0        <none>   node2
echoserver-0   1/1       Running              0        ...      node2
...
echoserver-1   0/1       ContainerCreating    0        <none>   node1
echoserver-1   1/1       Running              0        ...      node1
...
echoserver-2   0/1       ContainerCreating    0        <none>   node2
echoserver-2   1/1       Running              0        ...      node2

Step 4 Exit the watch command by pressing Ctrl-C. Execute hostname in the pods to make sure that their host names match the pod names:

$ kubectl exec echoserver-0 -- hostname
echoserver-0

$ kubectl exec echoserver-1 -- hostname
echoserver-1

$ kubectl exec echoserver-2 -- hostname
echoserver-2

Step 5 kubectl run --restart=Never can be used to launch a plain pod rather than a deployment. Start a new pod based on the busybox image to run a command once:

$ kubectl run --rm -it busybox --image=gcr.io/google-containers/busybox --restart=Never --command nslookup echoserver
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      echoserver
Address 1: 192.168.104.1 echoserver-0.echoserver.default.svc.cluster.local
Address 2: 192.168.166.129 echoserver-1.echoserver.default.svc.cluster.local
Address 3: 192.168.166.130 echoserver-2.echoserver.default.svc.cluster.local

Your IP addresses may differ.
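
Note that in this version of kubectl, the --restart option determines what kubectl run creates: Always (the default) creates a Deployment, OnFailure creates a Job, and Never creates a plain Pod.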

Step 6 In the second terminal, watch the pods again before deleting the stateful set:

$ kubectl get pods -w -o wide

Return to the first terminal and delete our stateful set and its associated service:

$ kubectl delete -f echoserver-ss.yaml
service "echoserver" deleted
statefulset "echoserver" deleted

In the second terminal, check that pods are deleted in a strict order (2,1,0):

NAME           READY     STATUS               RESTARTS IP       NODE
...
echoserver-2   1/1       Terminating          0        ...      node2
...
echoserver-1   1/1       Terminating          0        ...      node1
...
echoserver-0   1/1       Terminating          0        ...      node2

Close the second terminal.

6.13. Create a Job

6.13.1. Non-parallel Job

Step 1 Define a new job in the file myjob-1.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  completions: 5
  parallelism: 1
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: myjob
        image: busybox
        command: ["sleep", "20"]
      restartPolicy: Never

This job is based on the image busybox. It waits 20 seconds, then exits. We requested 5 successful completions with no parallelism.

Step 2 Create a new job:

$ kubectl create -f myjob-1.yaml
job "myjob" created

Step 3 Let’s watch the job being executed and the results of each execution:

$ kubectl get jobs --watch
NAME      DESIRED   SUCCESSFUL   AGE
myjob     5         1            1s
...
myjob     5         5            1m

Press Ctrl-C to stop watching the job after 5 successful completions.

Step 4 To get more details about the job you can use kubectl describe job, for example:

$ kubectl describe job myjob | grep "Pods Statuses"
Pods Statuses:      0 Running / 5 Succeeded / 0 Failed

Step 5 After that, the job can be deleted:

$ kubectl delete job myjob
job "myjob" deleted

6.13.2. Parallel Job

Step 1 Define a new job in the file myjob-2.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  completions: 5
  parallelism: 5
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: myjob
        image: busybox
        command: ["sleep", "20"]
      restartPolicy: Never

The only difference from the previous job declaration is parallelism: 5 instead of parallelism: 1.

Step 2 Create a new job:

$ kubectl create -f myjob-2.yaml
job "myjob" created

Step 3 Let’s watch the job being executed and the results of each execution:

$ kubectl get jobs --watch
NAME      DESIRED   SUCCESSFUL   AGE
myjob     5         1            1s
...
myjob     5         5            23s

Notice the AGE at completion: the desired number of pods run in parallel, so the job finishes more quickly than the previous job, which had no parallelism. Press Ctrl-C to stop watching the job after 5 successful completions.

Step 4 To get more details about the job you can use kubectl describe job, for example:

$ kubectl describe job myjob | grep "Pods Statuses"
Pods Statuses:      0 Running / 5 Succeeded / 0 Failed

Step 5 After that, the job can be deleted:

$ kubectl delete job myjob
job "myjob" deleted

6.13.3. Cron Job

One CronJob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, written in Cron format.

Step 1 Define the CronJob template in the file cronjob.yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
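
The schedule "*/1 * * * *" is written in the standard Cron format (the five fields are minute, hour, day of month, month, and day of week) and runs the job every minute.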

Step 2 Create a CronJob:

$ kubectl create -f cronjob.yaml
cronjob "hello" created

Step 3 Let’s watch the jobs being created by the CronJob and the results of each cron execution every minute:

$ kubectl get jobs --watch
NAME        DESIRED   SUCCESSFUL      AGE
hello-xxxx    1           0            1s
...
hello-xxxx    1           1            1s

Step 4 Use kubectl get pods --show-all to see the completed pods:

$ kubectl get pods --show-all
NAME             READY     STATUS    RESTARTS   AGE
hello-xxxx-yyyy   0/1      Completed   0          15s

Step 5 Use kubectl logs to see the completed pod’s output:

$ kubectl logs hello-xxxx-yyyy
Sun Mar  4 11:30:02 UTC 2018
Hello from the Kubernetes cluster

Step 6 Delete the CronJob:

$ kubectl delete -f cronjob.yaml
cronjob "hello" deleted

Step 7 A check for pods or jobs that were created by the CronJob shows that they were garbage collected:

$ kubectl get job,pod -a
No resources found.

6.14. Create a Daemon Set

A Daemon Set ensures that all (or some) nodes run a copy of a Pod. It tracks the addition and removal of cluster nodes: it adds pods on nodes that join the cluster and terminates pods on nodes that are removed from the cluster.

Step 1 Define a new daemon set in the file daemonset.yaml:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: echoserver
  labels:
    name: echoserver
spec:
  selector:
    matchLabels:
      name: echoserver
  template:
    metadata:
      labels:
        name: echoserver
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080

Step 2 Create a new daemon set:

$ kubectl create -f daemonset.yaml
daemonset "echoserver" created

Step 3 Check the pods started by our daemon set:

$ kubectl get pods -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP    NODE
echoserver-...     1/1       Running   0          5m        ...   node2
echoserver-...     1/1       Running   0          5m        ...   node1

As you can see, each worker node runs one replica of the echoserver pod. Note that there is no replica running on the master node; we will discuss the reason for that later, in Section 10.4, Taints and Tolerations.
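
As a quick preview, the taint that keeps daemon set pods off the master can be displayed with kubectl describe (the exact output may vary with your cluster setup):

$ kubectl describe node master | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule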

Step 4 Remove the daemon set and check that there are no pod’s replicas running:

$ kubectl delete ds echoserver
daemonset "echoserver" deleted

$ kubectl get pods
No resources found.

Checkpoint

  • Use Kubernetes CLI tool
  • Create a pod, label, annotation, replication controller, service, deployment, stateful set, job, daemon set