Docker Certified Associate Exam Series (Part-6): Docker Engine Security
In the previous blog of this 7-part series, we discussed Networking. This blog dives into Docker engine security.
Security within the IT landscape is as critical as maintaining a strong armed force for a country. To secure the underlying operations and business functions while working with Docker, you must secure the Docker Daemon.
If a hacker gains access to your Docker Daemon, they can perform these unauthorized operations:
- Deleting existing containers that run your applications.
- Deleting volumes that contain crucial data.
- Misusing containers to host their own applications (e.g., bitcoin mining).
- Running privileged containers that grant them root access to your resources.
- Targeting your IT Network or other connected systems within the network.
Docker Security Best Practices
The first step to ensuring security within your Docker platform involves following these Docker best practices:
- Disabling password-based authentication
- Enabling SSH-key-based authentication (see the sketch after this list)
- Determining access rights and privileges for users
- Disabling all unused ports
- Exposing ports only to private interfaces within your organization when enabling access to your Daemon from an external host
- Securing communication between hosts using mechanisms such as TLS certificates
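For the first two items, the relevant settings live in the SSH server configuration on the Docker host. A minimal sketch of the two directives involved, assuming OpenSSH is the SSH server in use:
# /etc/ssh/sshd_config on the Docker host
PasswordAuthentication no
PubkeyAuthentication yes
After editing the file, reload the SSH service (for example, with sudo systemctl reload sshd) for the change to take effect.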
Enabling Secure Communication Between Hosts
It is best to use TLS certificates to enable secure communication between hosts. To do so, set up a certificate authority (CA) and its certificate (cacert.pem), then create a server certificate and key signed by that CA (server.pem and serverkey.pem). Then configure the daemon to read these certificates and set the “tls” option to “true”. The configuration file will be as shown:
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}
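For reference, the server certificate and key above are typically generated with openssl and signed by your certificate authority. A minimal sketch, assuming the CA key is named cakey.pem, the CA certificate is cacert.pem, and the daemon’s address is 192.168.1.10 as in the configuration above:
# create the CA key and a self-signed CA certificate
openssl genrsa -out cakey.pem 4096
openssl req -new -x509 -days 365 -sha256 -subj "/CN=docker-ca" -key cakey.pem -out cacert.pem
# create the server key and a signing request, then sign it with the CA
openssl genrsa -out serverkey.pem 4096
openssl req -new -sha256 -subj "/CN=192.168.1.10" -key serverkey.pem -out server.csr
echo "subjectAltName = IP:192.168.1.10" > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out server.pem -extfile extfile.cnf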
Note that 2376 is the standard port for encrypted Docker daemon communication. We can now allow users to access the Docker API by pointing the DOCKER_HOST environment variable at the host:
export DOCKER_HOST="tcp://192.168.1.10:2376"
Now set the DOCKER_TLS environment variable to “true” to initiate a secure connection.
export DOCKER_TLS="true"
This sets up encrypted communication between the client and the Docker server. This communication, however, lacks authentication: anyone who can reach the exposed daemon port can still access the daemon by setting the DOCKER_TLS environment variable to “true” and pointing the DOCKER_HOST environment variable at the host.
To enable certificate-based authentication, copy the certificate authority’s certificate (cacert) to the daemon’s host, then set the tlsverify option to true and point tlscacert at the CA certificate in the daemon’s JSON file, as shown below:
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "tlsverify": true,
  "tlscacert": "/var/docker/caserver.pem"
}
Here the tlsverify option enables authentication, while tlscacert tells the daemon which CA to trust when verifying client certificates. We will then generate client certificates (client.pem and clientkey.pem) signed by the same certificate authority and distribute them securely to the client.
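A minimal sketch of generating those client certificates with openssl, assuming the same CA files as in the earlier sketch; the extendedKeyUsage extension marks the certificate as valid for client authentication:
openssl genrsa -out clientkey.pem 4096
openssl req -new -subj "/CN=client" -key clientkey.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out client.pem -extfile extfile-client.cnf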
On the client side, activate TLS verification by running this command:
export DOCKER_TLS_VERIFY="true"
Then, pass the client certificates either on the command line or by dropping them into the .docker directory within the user’s home directory.
docker --tlscacert=<> --tlscert=<> --tlskey=<> ps
If the certificates are placed in the .docker directory, the Docker client picks them up automatically. Either way, only clients with certificates signed by the CA server can access your Docker Daemon.
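As a convenience, you can skip the --tls flags entirely by placing the files in the default location. A minimal sketch, assuming the files generated earlier; by default the Docker client looks for ca.pem, cert.pem, and key.pem in the ~/.docker directory (or wherever DOCKER_CERT_PATH points):
mkdir -p ~/.docker
cp cacert.pem ~/.docker/ca.pem
cp client.pem ~/.docker/cert.pem
cp clientkey.pem ~/.docker/key.pem
export DOCKER_HOST="tcp://192.168.1.10:2376"
export DOCKER_TLS_VERIFY="true"
docker ps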
Namespaces and Capabilities
In Docker, namespaces isolate resources such as process IDs, network interfaces, and file systems. Each container on a host machine has its own set of namespaces, allowing it to operate independently of other containers on the same machine. This helps prevent conflicts between containers and ensures that they can run without interfering with each other.
Namespaces in Docker are implemented using the Linux kernel's namespaces feature, which allows processes to have a different view of the system resources than other processes. This means containers can have a virtualized view of the system, separate from the host machine and other containers running on it.
Docker also allows you to create network policies that apply to specific containers or sets of containers. These policies can control network traffic between containers or restrict access to specific resources within a container.
Every process in Linux has a Process Identifier (PID). When you boot up Linux, the first process (PID 1) is known as the root process. Because of PID namespaces, the same process can have more than one PID: one inside the container’s namespace and another on the host.
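You can see PID namespaces at work with a quick experiment; this is a minimal sketch, assuming ps is available both on the host and inside the ubuntu image, as in the examples below:
docker run -d --name ns-demo ubuntu sleep 1000
docker exec ns-demo ps -e
ps -e | grep sleep
Inside the container, the sleep process appears with a low PID such as 1, while on the host the same process shows up under a different, higher PID.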
Docker also assigns different privileges to different users. There are two main types of users in Docker: root and non-root. Root users have administrative privileges and can create, manage, and delete containers, while Non-Root users lack the super privileges of a Root user.
Here is how you can set a user ID through the Docker Command Line Interface. First, start the container using the command below:
docker run -d --name test --user=1000 ubuntu sleep 1000
Execute this command to attach the terminal to the container shell:
docker exec -it test /bin/bash
Run this command to see the process ID:
ps -aux
In the output, the sleep process is listed under user ID 1000 rather than root.
You can also specify the user ID in the image’s Dockerfile:
FROM ubuntu
USER 1000
Then, build this image by running the command:
docker build -t my-ubuntu-image .
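To confirm that the resulting image runs as the non-root user, a quick check (the image name my-ubuntu-image comes from the build command above):
docker run --rm my-ubuntu-image id
The id command should report uid=1000, confirming the container does not start as root.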
Keep in mind that full root privileges apply only on the Docker host; the root user inside a container runs with a restricted set of capabilities.
Linux Capabilities
Capabilities outline the roles and privileges of various users in a system. The root user is the system’s most powerful user. Root users and processes have unrestricted system access, including:
- Creating, killing, and managing processes and containers
- Setting user and group IDs
- Network operations
- System administration operations, and many more
The full list of Linux capabilities can be found in /usr/include/linux/capability.h on the host.
To check the capabilities of a normal running container, execute the following commands sequentially:
docker run -d --name test --user=1000 ubuntu sleep 1000
docker exec -it test /bin/bash
apt update && apt install -y libcap-ng-utils
pscap
The output lists the container’s default capability set, which includes capabilities such as chown, kill, setuid, and net_bind_service.
For interacting with the network stack, instead of using --privileged you should use --cap-add=NET_ADMIN, which allows the container to modify its network interfaces. The commands below demonstrate adding a single capability (MAC_ADMIN in this example) and verifying it. Execute them sequentially:
docker run -d --name test1 --cap-add MAC_ADMIN ubuntu sleep 1000
docker exec -it test1 /bin/bash
apt update && apt install -y libcap-ng-utils
pscap
The output now includes mac_admin in addition to the default capability set.
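The reverse also works: you can strip capabilities a workload does not need and add back only the ones it requires. A minimal sketch (the container name and image here are illustrative):
docker run -d --name locked-down --cap-drop ALL --cap-add NET_BIND_SERVICE nginx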
CGroups
Control Groups (CGroups) allow the allocation and distribution of resources among different processes and containers.
Resource Limits
All Docker processes run on the host’s kernel, sharing kernel resources with other processes. From the host’s point of view, containers are just processes. By default, a container can consume unlimited resources on the host.
If necessary, a container could use up all of a host’s resources, depriving other processes of them. When that happens, the kernel starts killing processes to free up resources; in extreme circumstances, it may even kill native host processes to preserve the system.
CPU
When two running processes need the same CPU, each gets an equal share of CPU time. The processes don’t run concurrently; they take turns on the CPU, but the switches happen in microseconds, making it appear as though the processes run simultaneously. You can allocate more CPU time to high-priority applications through CPU shares.
For example, if process A is allocated 1024 shares while process B gets 512, process A receives twice as much CPU time as process B whenever both compete for the CPU.
CPU shares are enforced by the kernel’s scheduler. The two schedulers relevant to Docker are the Completely Fair Scheduler (CFS), which is the default, and the Real-Time Scheduler. To allocate 512 CPU shares to a container, run the command:
docker container run --cpu-shares=512 nginx
You can also restrict which CPUs a particular container is allowed to run on by specifying a CPU set. The format for defining a CPU set is:
docker container run --cpuset-cpus=0-1 webapp
This command ensures that the process uses the first two CPUs in an array within the host. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
Additionally, you can limit the number of CPUs a process can use by specifying the CPU count:
docker container run --cpus=2.5 nginx
With the above command, even if the host machine has three CPUs and you set --cpus=2.5, the container is limited to at most two and a half CPUs’ worth of CPU time.
To add or reduce the CPU count, you can update it with the command:
docker container update --cpus=2.5 container-name
Without the proper mechanisms to limit CPU usage, a process/container could consume too much CPU power, taking up most or all of a host’s resources and making that server unresponsive.
Memory
Every system consists of Physical Memory known as Random Access Memory (RAM). A process can consume as much RAM as is available within the host unless we enforce some limits. When all system RAM has been consumed, Linux will use SWAP space configured on the host for memory allocation.
SWAP space is allocated on physical storage disks that can be used as memory. If a process uses up all RAM and SWAP memory, it is killed using an Out of Memory (OOM) exception.
You can specify the memory limit to be consumed by an application by including it in the command as shown:
docker container run -d --name webapp --memory=512m nginx
You can also specify the SWAP space limit as follows:
docker container run --memory=512m --memory-swap=512m webapp
In this case, you will have 0 MB SWAP memory since the allocation is usually the difference between the two figures. To allocate 256 MB to the SWAP memory, we’ll run the command:
docker container run --memory=512m --memory-swap=768m webapp
If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set.
To display a live stream of resource usage statistics for running containers:
docker stats
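If you prefer a one-off snapshot instead of a live stream, docker stats also accepts the --no-stream flag and a Go-template --format flag, for example:
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"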
At KodeKloud, we have a comprehensive Docker Certified Associate Exam preparation course. The course explains all Docker concepts included in the certification's curriculum. After each topic, you get interactive quizzes to help you internalize the concepts learned. At the end of the course, we have mock exams that will help familiarize you with the exam format, time management, and question types.
You can now proceed to the next part of this series: Docker Certified Associate Exam Series (Part-7): Docker Engine Storage & Volumes
Below are the previous parts of the Docker Certified Associate Exam Series:
- Docker Certified Associate Exam Series (Part-1): Container Orchestration
- Docker Certified Associate Exam Series (Part-2): Kubernetes
- Docker Certified Associate Exam Series (Part-3): Image Creation, Management, and Registry
- Docker Certified Associate Exam Series (Part-4): Installation and Configuration
- Docker Certified Associate Exam Series (Part-5): Networking
Research Questions
This concludes the Docker Engine Security chapter of the DCA certification exam. To test your knowledge, it is strongly recommended that you work through the research questions covering all the core concepts in the coursework, as well as the practice test that prepares you for the exam. You can also send feedback to the course developers if you would like something changed within the course.
Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back.
Quick Tip – Questions below may include a mix of DOMC and MCQ types.
1. What is a Linux feature that prevents a process within the container from performing filesystem-related operations, such as altering attributes of certain files?
[A] Control Groups (CGroups)
[B] Namespaces
[C] Kernel Capabilities
[D] Network Namespaces
2. What flags are used to configure encryption on the Docker daemon without any authentication?
[A] tlsverify, tlscert, tlskey
[B] key, cert, tls
[C] tls, tlscert, tlskey
[D] host, key, cert, tls
3. What will happen if --memory-swap is set to 0?
[A] The container does not have access to swap
[B] The container is allowed to use unlimited swap
[C] The setting is ignored, and the value is treated as unset
4. By default, all containers share the same number of CPU cycles. How can the shares be modified?
[A] docker container run --cpu-shares=512 webapp
[B] docker container run --cpuset-cpus=512 webapp
[C] docker container run --cpu-quota=512 webapp
[D] docker container run --cpus=512 webapp
5. Limit the container webapp to only use the first CPU or core. Select the right command.
[A] docker container run --cpuset-shares=1 webapp
[B] docker container run --cpus=0 webapp
[C] docker container run --cpuset-cpus=0 webapp
6. Assume that you have 1 CPU; which of the following commands guarantees the container at most 50% of the CPU every second?
[A] docker run -it --cpu-shares=512 ubuntu /bin/bash
[B] docker container run --cpuset-cpus=.5 webapp
[C] docker run -it --cpus=".5" ubuntu /bin/bash
[D] docker run -it --cpus=".5" --cpuset-cpus=1 ubuntu /bin/bash
7. What is a Linux feature that allows the isolation of containers from the Docker host?
[A] Control Groups (CGroups)
[B] Namespaces
[C] Kernel Capabilities
[D] LXC
8. By default, a container has no resource constraints.
[A] True
[B] False
Conclusion
The study guide covers important topics related to Docker Engine Security and other aspects of Docker architecture, setup, and configuration. Understanding container resource management and isolation features such as CPU allocation and Linux kernel features like Control Groups (CGroups) and Namespaces is essential.
Good luck with your exam preparation!