Container Runtime Interfaces In K8s

Introduction
In this article, we will discuss what a container runtime interface (CRI) is, why it is important, and what functionality it adds to Kubernetes.
What are Container Runtimes?
You may have heard Kubernetes described as a “container orchestration tool”, and that is accurate: it does exactly “just” that. However, we need to pinpoint that Kubernetes itself is not the component that creates, starts, or stops your containers when you run kubectl commands. Inside Kubernetes there is another layer: communication is handed off to a small component responsible for managing your containers, called a container runtime. If you have been using Kubernetes for a long time and were able to use earlier versions, you may have noticed that the interface for managing your containers was primarily the Docker Engine. This allowed users to interact with the containers created by Kubernetes using docker commands as well. The diagram below illustrates kubectl commands being sent to the Dockershim component in order to manage containers. Dockershim is a special integration component that enables Kubernetes to communicate with Docker.

The Open Container Initiative (OCI)
Around June 2015, the Open Container Initiative was created by leading companies in the container industry, including Docker. Its goal is to provide a specification that would become the industry standard for container images and runtimes. This is what set the standard for the thousands of different images in image repositories such as Docker Hub, Amazon Elastic Container Registry (ECR), and Google Container Registry. The same standard also paved the way for Kubernetes to support container runtimes other than Docker Engine.
What is the Container Runtime Interface (CRI)?
Kubernetes aims to give users more options and the ability to use different container runtimes with different sets of functionality and features. And thus, the Container Runtime Interface (CRI) standard was created. It builds on the standards set by the Open Container Initiative (OCI), and the idea is that as long as a container runtime adheres to the CRI, it will be compatible with Kubernetes. Here’s a list of container runtimes that meet the CRI standard today:
- containerd - Backed by the CNCF, it was the fifth project to graduate. It has more than 400 contributors in its GitHub repository and is primarily written in Go. It is the most mature container runtime available to Kubernetes. It touts low CPU and memory usage and is presented as a very stable runtime (if your container crashes, it does not corrupt or mess with your data).
- CRI-O - The “CRI” in its name refers to the Container Runtime Interface, and the “O” to the Open Container Initiative (OCI). Unlike the other container runtimes mentioned here, it is purpose-built for Kubernetes. Its GitHub repository has more than 200 contributors, and it is also written in Go. CRI-O puts emphasis on security and low resource overhead for its containers.
- Kata Containers - Started as an open-source project under the OpenStack Foundation, it aims to closely match the isolation and security benefits of a virtual machine without sacrificing much performance. For Kubernetes, Kata Containers also utilizes CRI-O for the Container Runtime Interface.
- Mirantis Container Runtime - Formerly known as Docker Enterprise Edition, it is a commercial container runtime solution from Mirantis and Docker. It uses cri-dockerd, a standalone shim adapter that allows Kubernetes to communicate with Docker. Its GitHub repository has 150+ contributors, and it is also written in Go.
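Whichever runtime you pick, the kubelet reaches it the same way: through a CRI endpoint, normally a Unix socket. As a hedged sketch, these are the socket paths commonly used as defaults for containerd and CRI-O (verify the actual paths on your own distribution):

```
# kubelet flag (or containerRuntimeEndpoint in the kubelet config file)
--container-runtime-endpoint=unix:///run/containerd/containerd.sock   # containerd
--container-runtime-endpoint=unix:///var/run/crio/crio.sock           # CRI-O
```

Because every runtime is addressed through the same endpoint mechanism, swapping runtimes is mostly a matter of pointing the kubelet at a different socket.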
A note on Docker Engine
Docker Engine on its own is not compliant with the CRI standard. As a temporary solution, Kubernetes came up with Dockershim to bridge the communication between Docker and Kubernetes. Starting with version 1.20, Kubernetes announced the surprising news that it would begin deprecating the Dockershim component in the Kubernetes package (previously, this was always included whenever you set up a new cluster), and it was completely removed in version 1.24. You can still use Docker Engine as your container runtime, but you will then have to use a different adapter called cri-dockerd. Check out the article here for more details on the removal of Dockershim in Kubernetes.
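If you do keep Docker Engine, cri-dockerd exposes its own CRI socket, and the kubelet points at that shim rather than at Docker directly. A sketch, assuming the default socket path that cri-dockerd installs:

```
# kubelet flag when using Docker Engine via the cri-dockerd shim
--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
```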
Container runtime hands-on
Now that we understand what container runtimes are, let’s go ahead and do some hands-on practice. For this, we’ll use a tool called Minikube. Minikube is a single-node, local Kubernetes sandbox that is used for testing, learning, and practicing commands with Kubernetes. If you don’t have it yet, you can follow the installation guide here. Once installed, you can easily create a new Kubernetes cluster with a simple command. We’ll start our minikube instance and use the default container runtime, Docker Engine. You can also use other container runtimes such as containerd and CRI-O as specified here, although choosing these may require additional setup.
Let’s start our minikube with the following command.
minikube start
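As a side note, if you later want to experiment with a different runtime, minikube accepts a --container-runtime flag at start time. A sketch of the invocations (these provision a fresh cluster, so don’t run them against one you care about):

```
minikube start --container-runtime=containerd
minikube start --container-runtime=cri-o
```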
Now that our minikube instance has started, let's first confirm which container runtime it is using with the following command:
kubectl get node -o=jsonpath="{.items[0].status.nodeInfo.containerRuntimeVersion}"
Output:
docker://20.10.8
We can see that it’s using Docker Engine as the container runtime. Now let's proceed to create an nginx pod in the cluster.
kubectl run nginx --image=nginx
kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30s
The pod is now in a running state. Next, let's get the ID of the container running inside the nginx pod.
kubectl get pod -l run=nginx -o=jsonpath="{.items[0].status.containerStatuses[0].containerID}"
docker://126a4911f5fbb4889416c8f6e9ce942cddfc39f0556aa7d1e25a42d845206ee2
Remember that Kubernetes is not the one responsible for creating or removing your containers. It simply sends an instruction to the container runtime, which then creates the container on behalf of Kubernetes. And here, we see that the pod created with the kubectl command is now associated with a Docker container.
Since we’re using Docker Engine as the container runtime, any containers created in the Kubernetes cluster can also be managed directly using Docker commands.
If we list the containers currently running in our Docker instance, we see that one of the containers matches the container id (first 12 characters) of our nginx pod.
docker exec minikube docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
126a4911f5fb nginx "/docker-entrypoint...." 29 minutes ago Up 29 minutes k8s_nginx_nginx_default_7b578365-b9bc-4876-bc1d-2a7bd5bbe20d_0
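The containerID string that Kubernetes reports is simply the runtime name, a “://” separator, and the full 64-character container ID, while docker ps prints only the first 12 characters. A small sketch using plain shell parameter expansion and the ID from above shows how the two relate:

```shell
# Full containerID as reported by Kubernetes (runtime prefix + 64-hex ID)
id='docker://126a4911f5fbb4889416c8f6e9ce942cddfc39f0556aa7d1e25a42d845206ee2'

runtime=${id%%://*}              # everything before '://'  -> docker
full=${id#*://}                  # the full 64-character container ID
short=$(printf '%.12s' "$full")  # the short form printed by 'docker ps'

echo "$runtime $short"
```

Running this prints the runtime prefix and the same 12-character ID shown in the docker ps output above.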
Now let’s try to remove the container using docker commands and see how Kubernetes will react to the change.
docker exec minikube docker rm -f 126a4911f5fb
Then let’s list the pods in Kubernetes.
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 1 33m
Notice how our nginx status changed from Running to ContainerCreating. This indicates that when we removed the container directly using docker commands, Kubernetes reacted automatically and sent another instruction to the container runtime to create a new container for the pod.
After a while, the pod’s status should become Running again:
kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   1          37m
Let’s inspect the new container information running in this pod with the command:
kubectl get pod -l run=nginx -o=jsonpath="{.items[0].status.containerStatuses[0].containerID}"
docker://b40e4dc9b0a5652be1e4be2765461117eafdbf307d08795abf1bf1f346194c98
Notice how we get a different container ID. This means a different container is now in place (replacing the old one that we removed) for our nginx pod in Kubernetes. It is the responsibility of Kubernetes to maintain the operability of the pods by sending instructions to the container runtime, whether that action is to create a new container or to remove one. This shows a bit of the “container orchestration” feature of Kubernetes in action.
Container standards that make your life easier
Thanks to the Container Runtime Interface (CRI) specification, no matter which container runtime you choose, the behavior should be similar across all of them. What sets them apart, however, are the CLI commands you use to interact with containers, as well as the different feature sets they come with.
So depending on whether you’re focusing on ease of use, performance, security, or all of the above, you’ll have quite a number of options to choose from. And the best part is that since most of them are compliant with the Open Container Initiative (OCI), choosing one does not lock you in. It is actually possible to switch between different container runtimes. So if you decide one day that you want to try out another container runtime, you can do that without overhauling your Kubernetes cluster.
Conclusion
Container runtimes are one of the most integral parts of Kubernetes. In the future, we might see more container runtimes with better resource management and security features added to the list. The Container Runtime Interface (CRI) and the Open Container Initiative (OCI) provide us the extensibility and flexibility to choose the container runtime with the features we require and that fits our container needs at the moment. And finally, as an endnote: if you have an older Kubernetes cluster and you’re planning to upgrade to the latest version, you may need to first consider the impact of the Dockershim removal before proceeding.
If you want to learn more about Docker and Kubernetes, check out the KodeKloud learning path from beginner to advanced. It contains a step-by-step guide that will help you improve your skills with these tools and better prepare you for your DevOps career.