Certified Kubernetes Administrator Exam Series (Part-4): Application Lifecycle Management

In the previous blog of this 10-part series, we discussed Logging & Monitoring. This section introduces various strategies used to manage the lifecycle of Kubernetes applications, ensuring high availability and continuously improving performance. 

When deploying a Kubernetes application into production, there are several strategies to stage and update cluster resources without affecting availability. This allows application users to have uninterrupted access even when developers are making changes and pushing updates to the application.

Rolling Updates & Rollback 

Rollouts and Versioning

When a deployment is created, it triggers a rollout. Every rollout, in turn, creates a deployment revision; let's call the first one Revision 1 (for our reference). When the application is upgraded by updating its container images, a new rollout is triggered, creating a newer deployment revision, Revision 2.

This makes it easy to keep track of changes in the deployment and roll them back whenever necessary. Below is a manifest file that creates a new deployment, myapp-deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
        - name: rss-reader
          image: nickchase/rss-php-nginx:v1
          ports:
            - containerPort: 88
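
Assuming the manifest above is saved to a file named deployment.yaml (an illustrative filename), the deployment can be created with:

kubectl create -f deployment.yaml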

To display running deployments, use the command:

kubectl get deployment

To check the status of a rollout, use the kubectl rollout status command:

kubectl rollout status deployment/myapp-deployment

To update the application's image to nginx:1.17, use this command:

kubectl set image deploy myapp-deployment front-end=nginx:1.17

To view the current image version of the app, run the describe pods command:

kubectl describe pods

In the Image field of the output, verify that you are running the updated image, nginx:1.17.

To roll back the deployment to your last working version, use the rollout undo command:

kubectl rollout undo deployment myapp-deployment

For rollout history, use this command:

kubectl rollout history deployment/myapp-deployment
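
To roll back to a specific revision rather than just the previous one, the undo command also accepts a --to-revision flag; rolling back to revision 1 here is purely illustrative:

kubectl rollout undo deployment/myapp-deployment --to-revision=1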

Deployment Strategy 

There are two kinds of deployment strategies in Kubernetes: recreate and rolling update.

A. Recreate strategy

In the Recreate strategy, all old POD instances are brought down before newer instances are brought up. This introduces a problem of application unavailability during an update.

The recreate strategy can be desirable for a number of reasons:

  • When updated applications require prerequisite data and configuration settings
  • The POD is mounted with a ReadWriteOnce volume that cannot be shared with other replicas
  • The application doesn’t support multiple versions 

B. Rolling Updates

Rolling Updates is a strategy that ensures high availability by incrementally replacing POD instances. Under this strategy, POD instances are taken down and replaced one by one, making the update process seamless. Kubernetes assumes rolling updates to be the default rollout strategy and performs such updates when the strategy is not specified.

.spec.strategy specifies the strategy used to replace old Pods with new ones. .spec.strategy.type can be “Recreate” or “RollingUpdate”.
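
For illustration, the strategy block in a deployment spec might look like the sketch below; the maxSurge and maxUnavailable values here are example choices rather than required settings:

spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one Pod above the desired replica count during the update
      maxUnavailable: 1    # at most one Pod may be unavailable during the update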

Docker Commands and Arguments

When we run a docker container built on the Ubuntu OS image:

docker run ubuntu

This starts the container, which then exits immediately. To check the number of running containers, use the command:

docker ps

The above command will not list stopped containers. To list all containers, including stopped ones, use:

docker ps -a

This shows that the container is in the Exited state:

CONTAINER ID   IMAGE     COMMAND        CREATED          STATUS                      PORTS     NAMES
f4a947ab4d03   ubuntu    "/bin/bash"    58 seconds ago   Exited (0) 53 seconds ago             pedantic_chaum

This is because containers are made to run specific tasks/processes and not Operating Systems. Once the task is complete, the container exits. A container, therefore, only lives for as long as the task running inside it is alive.

Commands and arguments define the process that runs in a container once it starts. This can be seen in the configuration files of popular Docker images: the CMD instruction for the nginx image is ["nginx"], and for the official mysql image it is ["mysqld"].

The Ubuntu container that ran earlier uses a plain OS image whose default CMD is ["bash"]. bash is a shell that waits for input from an attached terminal; since docker run does not attach one by default, bash finds no terminal, exits, and the container stops.
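
To keep such a container running interactively, a terminal can be attached with the -i and -t flags:

docker run -it ubuntu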

One way to override a container's default command is to append a command to docker run when starting the container:

docker run ubuntu sleep 5

In the case above, the container will start, run the sleep program, wait for 5 seconds, and then exit. It is also possible to use the OS as the base image and specify a command in a new Dockerfile; this way, the container will always run this command when it starts. For instance, an image named ubuntu-sleeper can be built from a Dockerfile with the following contents:

FROM ubuntu
CMD sleep 5

The CMD instruction can be specified in both shell and exec forms:

  • Shell form: CMD command param1, for instance CMD sleep 5
  • Exec form: CMD ["command", "param1"], for instance CMD ["sleep", "5"]

The new image can be built using the docker build command:

docker build -t ubuntu-sleeper .

The container is then run using the command:

docker run ubuntu-sleeper

By default, a container will always start, run the sleep program, wait for 5 seconds, and then exit. To change the number of seconds it sleeps, this can be appended to the docker run command:

docker run ubuntu-sleeper sleep 10

In this case, the command that runs at startup is sleep 10 so the container starts, runs the sleep program, waits for 10 seconds, and then exits.

If one wants to specify the number of seconds in the command line while the sleep program executes automatically, it is best to use the ENTRYPOINT instruction. This instruction specifies the program that executes when the container is running, but the number of seconds can be specified on the CLI. The Dockerfile for this container would be:

FROM ubuntu
ENTRYPOINT ["sleep"] 

Running the command docker run ubuntu-sleeper 10 means that the instruction executed at container runtime is sleep 10. Running this container without a command-line argument returns a ‘missing operand’ error. To configure the application with a default argument value (number of seconds), it is best to combine ENTRYPOINT and CMD instructions, as shown below:  

FROM ubuntu
ENTRYPOINT ["sleep"] 
CMD ["5"]

In this case, if no command-line argument is specified, sleep 5 is executed when the container runs. If an argument is specified on the command line, it overrides the CMD instruction; for instance, docker run ubuntu-sleeper 10 runs the container with sleep 10. When it is necessary to replace the ENTRYPOINT itself, pass the --entrypoint flag to docker run:

docker run --entrypoint echo ubuntu-sleeper 10

Kubernetes Commands and Arguments

The previous section explored the configuration of commands and arguments in Dockerfiles using ENTRYPOINT and CMD instructions. In Kubernetes, these instructions can be specified in the definition files for the PODs running these docker containers. Let’s consider the YAML definition file for the ubuntu-sleeper POD:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu-sleeper
    args: ["10"]

The args field in the POD definition file is the Kubernetes equivalent of the CMD instruction in a Dockerfile. To represent an ENTRYPOINT in a POD definition file, the command field is used: 

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu
    command: ["sleep"]
    args: ["10"]

Configuring Environment Variables

Environment variables in Kubernetes are configured using the env property. This property is an array, and a number of items can be specified under the property. Each item has a name and a value property, as shown in the simple POD definition file below: 

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: kodekloud/simple-webapp
    ports: 
    - containerPort: 8080
    env:
    - name: APP_COLOR
      value: pink

To list the Pod’s container environment variables, use this command:

kubectl exec simple-webapp-color -- env

This is known as the plain key-value format of setting environment variables. The variables can also be set using configMaps and secrets, which are explored in the coming sections.

Configuring ConfigMaps

In a cluster that uses numerous Pod instances, it gets difficult to manage the environment data stored across multiple YAML definition files. Environment variable information can be taken out of individual Pod definition files and managed centrally using objects known as configMaps. 

These objects store environment data in the form of key-value pairs and pass them to applications when injected into the Pod definition files. 

ConfigMaps can be created in two ways: imperatively and declaratively.  

With the imperative method, the configMap is created by stating the key-value pairs directly on the command line:

kubectl create configmap \
<config-name> --from-literal=<key>=<value> \
              --from-literal=<key2>=<value2>

For instance, we could create the configMap app-config by running the command:

kubectl create configmap \
app-config --from-literal=APP_COLOR=blue \
           --from-literal=APP_MOD=prod

If there are many environment variables, the command gets unwieldy. The variables can instead be written to a file and read from it imperatively:

kubectl create configmap \
<config-name> --from-file=<path-to-file>
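
For instance, assuming the variables are stored in a file named app_config.properties (an illustrative filename), the same app-config configMap could be created with:

kubectl create configmap \
app-config --from-file=app_config.properties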

Data is read from the file and stored under the given configMap name. In the declarative method, a YAML definition file is used to store the configuration data. For instance, the configMap for the sample app simple-webapp-color would be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MOD: prod
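
Assuming this definition is saved to a file, say config-map.yaml (an illustrative filename), the configMap is created with:

kubectl create -f config-map.yaml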

The configMap data is then injected into the Pod by specifying it in a YAML definition file as follows:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: kodekloud/simple-webapp
    ports: 
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: app-config

To list the Pod’s container environment variables, which are managed by configMaps, use the command:

kubectl exec simple-webapp-color -- env

To list the configMaps, use the command:

kubectl get configmaps

To get the data of the app-config configMap, use the command:

kubectl describe configmaps app-config
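
Besides injecting an entire configMap with envFrom, a single key can be pulled into one environment variable using valueFrom. Below is a minimal sketch that reuses the app-config configMap and its APP_COLOR key from the examples above:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: kodekloud/simple-webapp
    ports:
    - containerPort: 8080
    env:
    - name: APP_COLOR
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_COLOR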

Secrets

ConfigMaps are great for storing environment data to be used by resources within the Kubernetes cluster. They are, however, not suitable for sensitive data that requires more protection. Secrets store information in a base64-encoded format and are the object intended for sensitive data such as passwords and keys.

Just like configMaps, secrets can be created imperatively or declaratively. Let’s assume we are trying to create a secret as follows:

DB_Host: MySQL
DB_User: root
DB_Password: paswrd

Using the imperative method, a secret is created directly on the command line without the need for a definition file:

kubectl create secret generic \
<secret-name> --from-literal=<key>=<value> \
              --from-literal=<key2>=<value2>

The values stored in a Secret's YAML definition must be base64-encoded. To encode a value on a Linux host, use:

echo -n '<value>' | base64

Encoding mysql, root, and paswrd this way yields bXlzcWw=, cm9vdA==, and cGFzd3Jk respectively. With --from-literal, however, kubectl performs the encoding automatically, so the imperative command takes the plain-text values:

kubectl create secret generic \
app-secret --from-literal=DB_Host=MySQL \
           --from-literal=DB_User=root \
           --from-literal=DB_Password=paswrd

This command also gets complicated when there are numerous variables to specify. The secret data can also be read from a file:

kubectl create secret generic \
<secret-name> --from-file=<path-to-file>

Using the declarative method, the secret is first defined in a YAML file, with the values base64-encoded as shown above:

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Host: bXlzcWw=
  DB_User: cm9vdA==
  DB_Password: cGFzd3Jk

The secret is then created by running the command:

kubectl create -f secret-data.yaml

To view secrets, use the command:

kubectl get secrets

The newly created secret can be inspected using the command:

kubectl describe secrets

To get the secrets as presented in YAML format, run this command:

kubectl get secret app-secret -o yaml

The secrets can be decoded back to plain text format by running the following command on a Linux host:

echo -n '<value>' | base64 --decode
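
For instance, decoding cGFzd3Jk from the secret above returns the original value, paswrd:

echo -n 'cGFzd3Jk' | base64 --decode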

To inject a secret into a POD, it is specified as secretRef under envFrom in the POD definition file:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: kodekloud/simple-webapp
    ports: 
    - containerPort: 8080
    envFrom:
    - secretRef:
        name: app-secret

To list the Pod’s container environment variables, use this command:

kubectl exec simple-webapp-color -- env
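
Besides environment variables, a secret can also be mounted into a Pod as a volume, in which case each key becomes a file under the mount path. Below is a minimal sketch; the volume name and mount path are illustrative choices:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: kodekloud/simple-webapp
    volumeMounts:
    - name: app-secret-volume
      mountPath: /opt/app-secret-volumes
  volumes:
  - name: app-secret-volume
    secret:
      secretName: app-secret

Inside the container, the keys DB_Host, DB_User, and DB_Password would then appear as files under /opt/app-secret-volumes.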

Multi-Container PODs

Microservices architecture allows for the creation of light, agile, and reusable code. This makes it easy to scale the application up and down and easily manage deployments by updating only those services requiring an upgrade. In some cases, services may need to work together without having to be merged.

Such services are developed and deployed independently but scale together with the same number of replicas. Deploying them in a multi-container Pod lets them share the same lifecycle, network, and storage, making communication easier since they can reach each other via localhost.

To create a multi-container Pod, the new container’s information is added to the containers section under spec in the Pod definition file, as shown:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: kodekloud/simple-webapp
    ports:
    - containerPort: 8080
  - name: log-agent
    image: log-agent
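
When a Pod runs more than one container, per-container commands take the -c flag; for example, to view logs from the log-agent container defined above:

kubectl logs simple-webapp-color -c log-agent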

This concludes the Application Lifecycle Management section of the CKA certification exam.

You can now proceed to the next part of this series: Certified Kubernetes Administrator Exam Series (Part-5): Cluster Maintenance.

Here is the previous part of the series: Certified Kubernetes Administrator Exam Series (Part-3): Logging & Monitoring.

Research Questions

Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back. 

Quick Tip – Questions below may include a mix of DOMC and MCQ types.

1. Which command can be used to inspect the deployment and identify the number of Pods deployed by it?

[A] kubectl inspect pod

[B] kubectl inspect deploy

[C] kubectl describe deployment

[D] kubectl describe pod

2. There are two kinds of deployment strategies in Kubernetes: The Recreate and Rolling Update Strategies.

[A] True

[B] False

3. Upgrade the application by setting the image on the deployment to kodekloud/webapp-color:v3

Deployment Name: frontend
Deployment Image: kodekloud/webapp-color:v3
ContainerName: frontend

Solution:

kubectl set image deploy/frontend frontend=kodekloud/webapp-color:v3

4. Create a pod with the Ubuntu image to run a container to sleep for 5000 seconds.

Pod Name: ubuntu-sleeper-2
Command: sleep 5000

Solution:

apiVersion: v1 
kind: Pod 
metadata:
  name: ubuntu-sleeper-2 
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command:
      - "sleep"
      - "5000"

5. Create a pod with the given specifications. By default, it displays a blue background. Set the given command-line arguments to change it to green.

Pod Name: webapp-green
Image: kodekloud/webapp-color
Command line arguments: --color=green

Solution:

apiVersion: v1 
kind: Pod 
metadata:
  name: webapp-green
  labels:
      name: webapp-green 
spec:
  containers:
  - name: simple-webapp
    image: kodekloud/webapp-color
    args: ["--color", "green"]

6. The Recreate strategy is the default rollout strategy.

[A] True

[B] False

7. Add the environment variable on the POD to display a pink background.

Pod Name: webapp-color
Image Name: kodekloud/webapp-color
Env: APP_COLOR=pink

Solution:

apiVersion: v1
kind: Pod
metadata:
  name: webapp-color
spec:
  containers:
  - env:
    - name: APP_COLOR
      value: pink
    image: kodekloud/webapp-color
    name: webapp-color

8. Create a new ConfigMap for the webapp-color POD. Use the spec given below.

ConfigMap Name: webapp-config-map
Data: APP_COLOR=darkblue

Solution:

kubectl create configmap webapp-config-map --from-literal=APP_COLOR=darkblue

9. Create a multi-container pod with 2 containers. Use the spec given below.

If the pod goes into a CrashLoopBackOff state, add the command sleep 1000 to the lemon container.

POD Name: yellow
Container 1 Name: lemon
Container 1 Image: busybox
Container 2 Name: gold
Container 2 Image: redis

Solution:

apiVersion: v1
kind: Pod
metadata:
  name: yellow
spec:
  containers:
  - name: lemon
    image: busybox
    command:
      - sleep
      - "1000"

  - name: gold
    image: redis

Conclusion 

This part of the course offers an in-depth introduction to how application deployments behave in the production environment. By exploring the different rollout strategies, it provides guidance on choosing the right deployment strategy for different kinds of applications. KodeKloud's curriculum also includes practicals on how to perform rollbacks when updates result in undesired changes in the application.

This section is crucial for prospective Kubernetes administrators since it is a guide on how to manage applications from inception to retirement. With the knowledge explored in this section, the candidate can easily create running applications based on CMD and ENTRYPOINT instructions, and manage application environments using Environment Variables, ConfigMaps, and Secrets.

Exam Preparation Course

Our CKA Exam Preparation course explains all the Kubernetes concepts included in the certification’s curriculum. After each topic, you get interactive quizzes to help you internalize the concepts learned. At the end of the course, we have mock exams that will help familiarize you with the exam format, time management, and question types.

Explore our CKA exam preparation course curriculum.

Enroll Now!