Docker Architecture Explained: Docker Client, Docker Daemon & Docker Registry

Docker is a platform that helps you run your application in a container. A container is sort of like a virtual machine, but much more lightweight and easier to set up. The idea is that you package up your application and all of its dependencies into a container, and then you run that container on any machine that has Docker installed. This is really useful in software development because it means you can be sure that your code will run the same way on your local machine, on your colleague's machine, and in production. You don't have to worry about compatibility issues.

One really cool thing about Docker is that it makes your containerized software "platform-agnostic". This means it doesn't matter what kind of computer you're using or what kind of operating system you have: if your computer can run Docker, it can run your software. This is really helpful because you can develop and test your software on one computer, then easily move it to a totally different computer, and it will still work the same way.

The Docker platform provides a set of tools and APIs (Application Programming Interfaces) that enable you to:

  • Create and manage Docker images
  • Run and manage Docker containers
  • Monitor and log the behavior and performance of your containers
  • Manage the networking and security of your containers
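
To make these capabilities concrete, here is a quick sketch using the standard docker CLI; the image, container, and network names (myapp, myapp-container, mynet) are just placeholders:

    # Create an image: build from a Dockerfile in the current directory
    docker build -t myapp:1.0 .

    # Run a container from that image in the background
    docker run -d --name myapp-container myapp:1.0

    # Monitor and log: stream the container's logs and live resource usage
    docker logs -f myapp-container
    docker stats myapp-container

    # Networking: create a user-defined network and attach the container to it
    docker network create mynet
    docker network connect mynet myapp-container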

In this blog post, we will start by briefly discussing Docker images and containers - the core building blocks of Docker. Then, we will dive into the three essential components of the Docker architecture: the Docker daemon, the Docker client, and the Docker registry. We will discuss how these components interact with each other to build, run, and manage containers. Let’s begin.

Try the Docker Basic Commands Lab for free

Docker Images and Containers

We will break down the concept of images and containers in Docker using a fun analogy. Think of it like a recipe and a dish, say a delicious Italian pizza. An image in Docker can be thought of as a recipe for a dish. It contains all the instructions and ingredients needed to create a specific version of the dish. Just like a recipe, an image is a static and reusable piece of information that can be used to create multiple instances of the dish.

A container in Docker is like a specific instance of the dish that is created using the recipe (image). It represents a running instance of the image, with its own unique set of processes and resources. Just like a dish, a container is a dynamic and short-lived entity that is created and destroyed as needed.
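
To see the analogy in practice, here is a minimal sketch using the public nginx image from Docker Hub: one "recipe" producing two independent "dishes":

    # Download the recipe (image) once
    docker pull nginx

    # Cook two dishes (containers) from the same recipe
    docker run -d --name pizza-1 nginx
    docker run -d --name pizza-2 nginx

    # Both containers are running from the one nginx image
    docker ps

Each container gets its own processes and its own writable filesystem layer, while the underlying image never changes.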

Now that we have a basic understanding of Docker images and containers, let's turn our attention to the Docker architecture.

Docker Architecture

Docker uses a client-server architecture. At its core, the Docker architecture consists of three main components:

  • Docker daemon
  • Docker client
  • Docker registry

The Docker daemon and the Docker client provide the core functionality, while the Docker registry is a complementary component for storing and distributing images. These three components work together to build, run, and manage Docker containers.

Docker Daemon

The Docker daemon (dockerd) is a background process that runs on the host operating system and manages the lifecycle of Docker containers. It is the process that actually builds and runs containers, and it also handles storing and distributing Docker images.

The Docker daemon exposes a REST API (called the Docker Engine API), which is a way for other programs to talk to it and give it commands. The Docker client is one such program: whenever we run a docker command, the client uses the API to tell the Docker daemon what to do. Other programs can interact with the daemon as well, even in an automated way, without requiring manual user input.
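
As an illustration (assuming a default Linux setup, where dockerd listens on the /var/run/docker.sock Unix socket), you can talk to the Engine API directly with curl:

    # Ask the daemon for version information over its REST API
    curl --unix-socket /var/run/docker.sock http://localhost/version

    # List running containers -- roughly what `docker ps` does for you
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json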

Docker Client

The Docker client (docker) is the primary way most users interact with Docker. It is a command-line interface (CLI) utility that lets users talk to the Docker daemon. Simply put, users type short, human-readable commands telling the Docker client what they want to do, and the client then tells the Docker daemon what action to take.

The Docker client communicates with the Docker daemon via the REST API mentioned above. A REST API is just a way for different pieces of software to talk to each other over HTTP. In the case of Docker, the client uses the REST API to send requests to the daemon to do things like build, run, and manage containers.

Note that the Docker client can run on the same host as the Docker daemon, or it can run on a separate machine and communicate with the Docker daemon over the network.
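
For instance, assuming the remote daemon has been configured to accept TCP connections (this is not the default, and in practice it should be protected with TLS), you could point your local client at it like so; remote-host is a placeholder:

    # Run a single command against a remote daemon
    docker -H tcp://remote-host:2376 ps

    # Or point the client at the remote daemon for the whole shell session
    export DOCKER_HOST=tcp://remote-host:2376
    docker ps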

Docker Registry

A Docker registry is a central location for storing and distributing Docker images. The most common registry is Docker Hub, where developers can publish their images for others to use. You can also set up a private registry, but Docker is configured to look for images on Docker Hub by default.

When you use docker pull or docker run commands, Docker automatically retrieves the necessary images from the registry you have configured. On the other hand, when you use docker push, the image you specify gets uploaded to your configured registry.
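
A typical round trip might look like this; yourusername and myapp are placeholders for your own Docker Hub account and image:

    # Download an image from the configured registry (Docker Hub by default)
    docker pull nginx

    # Tag a local image under your own namespace, log in, and upload it
    docker tag myapp:1.0 yourusername/myapp:1.0
    docker login
    docker push yourusername/myapp:1.0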

We've talked about the different pieces of the Docker puzzle - the client, the daemon, and the registry. Next, let's understand how Docker actually works under the hood.

The Technology Behind Docker

Docker is built on top of several features of the Linux kernel. The kernel is the part of the operating system that communicates directly with the computer's hardware and manages the resources (like memory, CPU, and so on) of the system. Let's take a closer look at two of these features: namespaces and control groups.

  • Namespaces: When you run a container, Docker uses namespaces to give your app an isolated view of the system. The app can't see or interact with the processes, network interfaces, or filesystems of other apps running on the same machine. This is useful because it lets you run multiple apps on the same machine without allowing them to negatively interfere with each other.
  • Control groups (cgroups): cgroups set limits on how much of the computer's resources (like CPU, memory, and so on) each container can use. This helps make sure that one container doesn't use up all the resources and leave nothing for the other apps. You can see both features in action in the sketch after this list.
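
Here is a small sketch of both features; the memory and CPU values are arbitrary examples:

    # Namespaces: inside the container, `ps` sees only the container's own
    # processes -- none of the other processes on the host
    docker run --rm alpine ps aux

    # Control groups: cap this container at 512 MB of RAM and one CPU
    docker run -d --name limited --memory=512m --cpus=1 nginx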

Conclusion

Docker is a widely used, industry-standard tool for building and running containers. If you are an aspiring DevOps engineer, learning Docker is a must.

If you are interested in learning more about Docker, you might want to consider taking a course or workshop on the subject. Our courses provide a thorough introduction to Docker, giving you the skills and knowledge you need to succeed in the field of DevOps. What's more, they're super easy to understand.

With the skills and knowledge you gain from our courses, you will be ready to explore advanced technologies that build upon containerization, such as container orchestration with tools like Kubernetes. You will also be well-equipped to tackle the complex and dynamic challenges of the modern IT world.