In this lab, we will learn several Kubernetes best practices for building, deploying, and managing containerized applications.
| Chapter Details | |
|---|---|
| Chapter Goal | Learn Kubernetes best practices |
| Chapter Sections | |
In this chapter, we will install an existing multi-container application (the Sock Shop demo).
Step 1 Create a new Kubernetes namespace to isolate the application resources, and create a new context that uses it:
$ kubectl create namespace sock-shop
namespace "sock-shop" created
$ kubectl config set-context sock-shop --cluster=kubernetes --user=kubernetes-admin --namespace=sock-shop
Context "sock-shop" created.
$ kubectl config use-context sock-shop
Switched to context "sock-shop".
Step 2 Use kubectl apply to launch the application from the YAML file that contains the definitions of multiple building blocks. If a resource does not exist, kubectl apply acts like kubectl create, but it also adds an annotation that acts as a version identifier. Every subsequent apply performs a three-way diff between the previous configuration, the provided input, and the current configuration of the resource in order to determine how to modify it. This way you can keep your configuration under source control and continually apply/push your configuration changes to your deployed application (an example of inspecting the stored annotation follows the output below).
$ kubectl apply -f ~/k8s-examples/apps/sock-shop/complete-demo.yaml
deployment "carts-db" created
service "carts-db" created
deployment "carts" created
service "carts" created
deployment "catalogue-db" created
service "catalogue-db" created
deployment "catalogue" created
service "catalogue" created
deployment "front-end" created
service "front-end" created
deployment "orders-db" created
service "orders-db" created
deployment "orders" created
service "orders" created
deployment "payment" created
service "payment" created
deployment "queue-master" created
service "queue-master" created
deployment "rabbitmq" created
service "rabbitmq" created
deployment "shipping" created
service "shipping" created
deployment "user-db" created
service "user-db" created
deployment "user" created
service "user" created
Step 3 It takes several minutes to download and start all the containers. Watch the output of kubectl get pods to see when they’re all up and running. You can use the -o wide option to see how the pods are assigned to nodes:
$ kubectl get pods -w -o wide
NAME READY STATUS RESTARTS IP NODE
carts-2469883122-1jq33 1/1 Running 0 192.168.166.129 node1
carts-db-1721187500-63cs8 1/1 Running 0 192.168.104.1 node2
catalogue-4293036822-f42jw 1/1 Running 0 192.168.166.130 node1
catalogue-db-1846494424-rcc2p 1/1 Running 0 192.168.104.2 node2
front-end-2337481689-hch6k 1/1 Running 0 192.168.104.3 node2
orders-733484335-txrj6 1/1 Running 0 192.168.104.4 node2
orders-db-3728196820-0jq8s 1/1 Running 0 192.168.166.131 node1
payment-3050936124-g15tc 1/1 Running 0 192.168.166.132 node1
queue-master-2067646375-ff3nf 1/1 Running 0 192.168.104.5 node2
rabbitmq-241640118-xftj8 1/1 Running 0 192.168.166.133 node1
shipping-2463450563-xmjc9 1/1 Running 0 192.168.166.134 node1
user-1574605338-kl8xp 1/1 Running 0 192.168.104.6 node2
user-db-3152184577-r9czh 1/1 Running 0 192.168.166.135 node1
Note that the pods’ names, IP addresses, and node assignments may be different in your lab.
Press Ctrl-C to exit the watch command.
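As an alternative to watching the whole pod list, you can wait for a single Deployment to finish rolling out, for example the front-end Deployment (the exact message depends on your kubectl version):
$ kubectl rollout status deployment front-end
deployment "front-end" successfully rolled out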
Step 4 To find out the port allocated for the front-end service, run the following command:
$ kubectl get service front-end
NAME CLUSTER-IP EXTERNAL-IP PORT(S)
front-end 10.110.140.68 <nodes> 80:30001/TCP
As you can see, port 30001 (it may be different in your lab) is allocated for the front-end service.
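To extract just the node port, for example for use in a script, a jsonpath query also works; the expression below assumes the service exposes a single port:
$ kubectl get service front-end -o jsonpath='{.spec.ports[0].nodePort}'
30001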
Step 5 Using the public IP address of the master node and the port number from the previous step, open your web browser and go to the following URL:
http://<lab-ip>:<front-end-port>
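If you cannot open a browser to the lab, the same check can be done with curl (this assumes the node port is reachable from where you run the command); you should get an HTTP response from the front-end:
$ curl -I http://<lab-ip>:<front-end-port>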
Step 6 Explore how the application works and what containers and building blocks it has. Finally, remove the application namespace and switch back to the kubernetes-admin@kubernetes context. Deleting the namespace also deletes all the application building blocks in it:
$ kubectl delete namespace sock-shop
namespace "sock-shop" deleted
$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
Kubernetes allows using more than one container in a Pod to ensure data locality (containers in a Pod run in a “logical host”) and to make it possible to manage several tightly coupled application containers as a single unit. In this section, we will learn how to run more than one container in a single pod and how these containers can communicate with each other. You can find the files we use in this section in the directory ~/k8s-examples/mc/.
Containers in a Pod share the same IPC namespace, which means they can also communicate with each other using standard inter-process communication mechanisms such as System V semaphores or POSIX shared memory.
In the following example, we define a Pod with two containers. We use the same Docker image for both. The first container, producer, creates a standard Linux message queue, writes a number of random messages, and then writes a special exit message. The second container, consumer, opens that same message queue for reading and reads messages until it receives the exit message. We also set the restart policy to ‘Never’, so the Pod stops after termination of both containers.
Step 1 Create a new YAML file mc2.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: mc2
spec:
  containers:
  - name: producer
    image: allingeek/ch6_ipc
    command: ["./ipc", "-producer"]
  - name: consumer
    image: allingeek/ch6_ipc
    command: ["./ipc", "-consumer"]
  restartPolicy: Never
Step 2 Create a new Pod using the definition in the mc2.yaml file and watch its status:
$ kubectl create -f mc2.yaml && kubectl get pods --show-all -w
NAME READY STATUS RESTARTS
mc2 0/2 Pending 0
mc2 0/2 ContainerCreating 0
mc2 0/2 Completed 0
Press Ctrl-C to stop watching the Pod. The --show-all flag displays completed/not running pods.
Step 3 Check the logs of each container and verify that the second container received all messages from the first one, including the exit message:
$ kubectl logs mc2 -c producer --tail 5
Produced: d7
Produced: 81
Produced: c2
Produced: 59
Produced: e5
$ kubectl logs mc2 -c consumer --tail 5
Consumed: 81
Consumed: c2
Consumed: 59
Consumed: e5
Consumed: done
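In recent kubectl releases you can also fetch the logs of both containers with a single command (the flag may not be available in older versions):
$ kubectl logs mc2 --all-containers --tail 5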
In Kubernetes, containers in a Pod are started in parallel, and there is no mechanism to define container dependencies or startup order. For example, in the IPC example there is some chance that the second container tries to read the queue before the first one has had a chance to create it. In that case, the second container will fail, because it expects the message queue to already exist. One way to fix this race condition would be to change the consumer to wait for the message queue to be created; another would be to use Init Containers, which we will look at later, to create the queue.
Step 4 Clean up the cluster, remove the mc2 Pod:
$ kubectl delete pod mc2
pod "mc2" deleted
Containers in a Pod are accessible via the “localhost” interface because they all use the same network namespace. Also, the host name observed inside each container is the Pod’s name. Because containers share the same IP address and port space, they must use different ports for incoming connections. In other words, applications in a Pod must coordinate their use of ports.
In the following example, we will create a multi-container Pod where nginx in one container works as a reverse proxy for a simple web application running in the second container.
Step 1 Create a new YAML file mc3-nginx-conf.yaml with the following content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
      worker_connections 1024;
    }
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
      sendfile on;
      keepalive_timeout 65;
      upstream webapp {
        server 127.0.0.1:5000;
      }
      server {
        listen 80;
        location / {
          proxy_pass http://webapp;
          proxy_redirect off;
        }
      }
    }
This Config Map contains a configuration file for nginx.
Step 2 Create the mc3-nginx-conf Config Map:
$ kubectl create -f mc3-nginx-conf.yaml
configmap "mc3-nginx-conf" created
Step 3 Define a multi-container Pod with the simple web app and nginx in separate containers. Create a new YAML file mc3.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: mc3
  labels:
    app: mc3
spec:
  containers:
  - name: webapp
    image: training/webapp
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-proxy-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-proxy-config
    configMap:
      name: mc3-nginx-conf
Note that although we define only the nginx port 80 for the Pod, port 5000 is still accessible outside of the Pod. Any port on which a process listens on the default “0.0.0.0” address inside a container is accessible from the network (you can verify this after exposing the Pod; see the example after Step 7).
Step 4 Create a new Pod using the definition in the mc3.yaml file:
$ kubectl create -f mc3.yaml
pod "mc3" created
Step 5 Expose the Pod using a NodePort service:
$ kubectl expose pod mc3 --type=NodePort --port=80
service "mc3" exposed
Step 6 Identify the port on the node that is forwarded to the Pod:
$ kubectl get service mc3
NAME CLUSTER-IP EXTERNAL-IP PORT(S)
mc3 10.98.243.246 <nodes> 80:30707/TCP
As you can see, port 30707 (it may be different in your lab) is allocated for the mc3 service.
Step 7 Use the public IP address of the master node and the port number from the previous step:
$ curl http://<lab-ip>:<mc3-port>
Hello world!
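As noted in Step 3, port 5000 of the webapp container is still reachable directly, bypassing the nginx proxy. You can check this from a machine that can reach the Pod network (for example, one of the cluster nodes), using the Pod IP address shown by kubectl get pod mc3 -o wide:
$ curl http://<mc3-pod-ip>:5000
Hello world!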
Step 8 Clean up the cluster, delete the Service, Pod, and Config Map:
$ kubectl delete service mc3
service "mc3" deleted
$ kubectl delete pod mc3
pod "mc3" deleted
$ kubectl delete cm mc3-nginx-conf
configmap "mc3-nginx-conf" deleted
A Pod can have multiple Containers running apps within it, but it can also have one or more Init Containers, which are run before the app Containers are started. Init Containers are exactly like regular Containers, except:
- Init Containers always run to completion.
- Each Init Container must complete successfully before the next one starts.
We will define a Pod similar to the one we used in 9.2.1, Shared volumes in a Kubernetes Pod: an nginx server with a shared volume. Instead of the second container, which updated the index.html file every second, we will use two init containers.
Step 1 Create a new YAML file mc4.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: mc4
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: init1st
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
    - echo Hello from init1st >> /html/index.html
  - name: init2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
    - echo Hello from init2nd >> /html/index.html
Step 2 Create a new Pod using the definition in the mc4.yaml file and watch its status:
$ kubectl create -f mc4.yaml && kubectl get pods --show-all -w
pod "mc4" created
NAME READY STATUS RESTARTS AGE
mc4 0/1 Init:0/2 0 0s
mc4 0/1 Init:1/2 0 10s
mc4 0/1 PodInitializing 0 12s
mc4 1/1 Running 0 19s
Press Ctrl-C to stop watching the Pod.
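The state of each init container is recorded in the Pod status. If you describe the Pod, the Init Containers section should show both init containers as Terminated with reason Completed:
$ kubectl describe pod mc4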
Step 3 Check that index.html contains both lines added by init containers:
$ kubectl exec mc4 -c 1st -- cat /usr/share/nginx/html/index.html
Hello from init1st
Hello from init2nd
Step 4 Clean up the cluster, delete the pod:
$ kubectl delete pod mc4
pod "mc4" deleted
Step 1 Create a new Pod definition in the ah1.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: ah1
spec:
  containers:
  - image: gcr.io/google_containers/echoserver:1.4
    name: echoserver
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 1
Step 2 Create a new Pod using the definition from the ah1.yaml file:
$ kubectl create -f ah1.yaml
pod "ah1" created
Step 3 Check that the Pod is running and there are no restarts:
$ kubectl get pod ah1
NAME READY STATUS RESTARTS
ah1 1/1 Running 0
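To confirm the probe the kubelet is using, inspect the Pod description; the Liveness line shows the probe type, path, port, and timing parameters (the exact formatting may vary between kubectl versions):
$ kubectl describe pod ah1 | grep -i liveness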
Step 4 Create a new Pod definition in the ah2.yaml file. The differences from the ah1 Pod are a new name (ah2 instead of ah1) and a different port number for the liveness probe (8081 instead of 8080):
apiVersion: v1
kind: Pod
metadata:
  name: ah2
spec:
  containers:
  - image: gcr.io/google_containers/echoserver:1.4
    name: echoserver
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /
        port: 8081
      initialDelaySeconds: 15
      timeoutSeconds: 1
Step 5 Create a new Pod using the definition from the ah2.yaml file:
$ kubectl create -f ah2.yaml
pod "ah2" created
The container in the ah2 Pod does not respond on port 8081, so the liveness probe we defined will fail.
Step 6 Wait ~1 minute and check the Pod’s status:
$ kubectl get pod ah2
NAME READY STATUS RESTARTS
ah2 1/1 Running 3
As you can see, Kubernetes has restarted our Pod several times. Use kubectl describe to see detailed information in the Events section:
$ kubectl describe pod ah2
...
Type Reason Message
-------- ------ -------
Warning Unhealthy Liveness probe failed ...
Normal Killing Killing container ...
...
Step 7 Clean up the cluster, delete the ah1 and ah2 pods:
$ kubectl delete pod ah1
pod "ah1" deleted
$ kubectl delete pod ah2
pod "ah2" deleted
Checkpoint