If there’s one word you hear no matter which developer circle you’re running with, it’s Docker. In a short time, Docker has managed to win over developers everywhere, thanks to its ability to smooth over most compatibility issues.
So what exactly is Docker, and why is it so important?
How I Was Introduced to Docker
Well, let’s start with my own introduction to Docker. This happened around the time I had to perform what turned out to be an incredibly complex job. I was required to set up an end-to-end stack using different technologies: a web server using Node.js, a messaging system using Redis, a database using MongoDB and an orchestration tool using Ansible, among others.
Any developer will immediately be able to tell you just why we had a lot of issues using all these different components. First of all, we had to ensure compatibility with the version of the operating system we were planning to use. Quite a few times, certain versions of the services were just not compatible and we had to find a different OS!
Secondly, we had to keep checking the compatibility of the services with the libraries and dependencies of the OS. So one version of the service would require one version of a dependent library, but another version would require a different one! And every time our architecture changed, we would have to repeat the process of checking for compatibility between the components and underlying infrastructure. It was a regular old matrix from hell.
Why do you need Docker?
Thirdly, any time we had a new developer on board, setting up their environment would be a nightmare. The new developers would have to follow a large set of instructions and run hundreds of commands. They would have to make sure they were using the right OS and the right version of that OS. They would have to basically set up the environment all by themselves, and it was a long drawn out affair every time!
On top of this, different developers were used to different test and production environments. One developer would be comfortable with using one OS, and another would use a different one. We could never guarantee that the application we were building would run the same way in different environments.
All the frustration I faced had me looking for a solution: something that could help us with the compatibility issues, and which could modify and change the components without affecting other components or the OS. And I found the answer in Docker.
Docker is an open platform that lets you run each component in a separate container, with its own libraries and dependencies. The containers all run on the same VM and OS, but each gets its own isolated environment. Once we found Docker, things started smoothing over.
All the developers had to do was ensure they had Docker installed on their systems, irrespective of the OS. Just set up the configuration once, and get your developers cracking!
Let’s break down some of the concepts we introduced in that last paragraph.
What are containers?
Containers refer to completely isolated environments. They have their own processes or services, their own networking interfaces and their own mounts, just like virtual machines. What’s different about them is that they share the same OS kernel.
Containers have been around for over a decade, and are definitely not new to Docker. What Docker does is provide a high-level tool for setting up what are otherwise low-level container environments, making them far easier to work with.
At this point, it should help to revisit a few basic OS concepts.
All operating systems consist of two parts: an OS kernel and a set of software above it. The kernel is what interacts with the underlying hardware; the software above it is what makes each OS distinct. So while a common Linux kernel is shared among many operating systems, it’s the custom software on top that differentiates one OS from another.
Sharing the Kernel
Docker containers share an underlying kernel. Say you have an Ubuntu machine with Docker installed on it. Docker can then run containers based on any flavor of OS on top of it, as long as they’re all based on the same kernel, in this case Linux. Each Docker container carries only the additional software that makes that particular OS distinct, and relies on the underlying kernel of the Docker host, which works for all of them.
So is there an OS that does not share the Linux kernel? Yep: Windows. And that means you can’t run a Windows-based container on a Docker host running Linux. For that, you’ll need Docker on a Windows server.
Is that a disadvantage? Nope. Because unlike hypervisors, Docker is not meant to virtualize and run different OS and kernels on the same hardware. Instead, it containerizes, ships, and runs applications.
It’s time we took a look at the difference between containers and Virtual Machines.
Containers vs. Virtual Machines
In order to understand the difference between containers and VMs, it’s necessary to take a look at the hierarchy of their infrastructure.
Docker’s hierarchy runs like this: the underlying hardware infrastructure, then the OS, and then Docker installed on the OS. Docker then manages the containers that run with libraries and dependencies alone.
Virtual machines have the following hierarchy: first the OS on the underlying hardware, then the hypervisor, and then the VMs. Each VM has its own OS inside it, then the dependencies, and then the application.
Docker vs Virtual Machines
Because each VM runs a full virtual OS and kernel, VMs cause higher utilization of the underlying resources. They also consume more disk space: each VM is heavy, typically gigabytes in size, while Docker containers are lightweight, typically megabytes. As a result, containers boot up in seconds, whereas VMs take minutes since an entire OS has to boot. The trade-off is isolation: containers share more resources, like the kernel, so they are less isolated, while VMs are completely isolated since they don’t rely on the underlying OS or kernel. That’s why you can run different types of OS, like Linux and Windows, side by side on the same hypervisor, but not on a single Docker host.
How it’s done
There are lots of containerized versions of applications readily available today. Most organizations have their products containerized and published in a public Docker registry called Docker Hub or Docker Store. There you can find images of the most common operating systems, databases, and other services and tools.
All you have to do is find the images you need and install Docker on the host. Once you’ve done that, bringing up an application component is as easy as running a docker run command with the name of the image. The below command, for example, runs an instance of Ansible on the Docker host:

docker run ansible
Similarly, you can run an instance of Redis, MongoDB, or Node.js using the docker run command. If you need to run multiple instances of the web service, just launch as many instances as you need and configure a load balancer of some sort in front.
If an instance fails, simply destroy that instance and launch a new one.
That’s a short beginner’s guide to Docker. For a full course on Docker with hands-on lab exercises, head over to kodekloud.com. KodeKloud provides online, self-paced, interactive, hands-on courses on various technologies like Docker, Kubernetes, and OpenShift, as well as automation tools like Ansible, Puppet, and Chef.