Docker Certified Associate Exam Series (Part-6): Docker Engine Security

In the previous blog of this 7-part series, we discussed Networking. This blog dives into Docker engine security.

Security within the IT landscape is as critical as maintaining a strong armed force for a country. To secure the underlying operations and business functions while working with Docker, you must secure the Docker Daemon.



If a hacker gains access to your Docker Daemon, they can perform the following unauthorized operations:

  • Deleting existing containers that run your applications.
  • Deleting volumes that contain crucial data.
  • Misusing containers to host their own applications (e.g., bitcoin mining)
  • Running privileged containers that grant them root access to your resources.
  • Targeting your IT Network or other connected systems within the network.

Docker Security Best Practices

The first step to ensuring security within your Docker platform involves following these Docker best practices:

  • Disabling password-based authentication
  • Enabling SSH-key-based authentication
  • Determining access rights and privileges for users
  • Disabling all unused ports
  • Exposing ports only to private interfaces within your organization when enabling daemon access from an external host
  • Securing communication between hosts using mechanisms such as TLS certificates

Enabling Secure Communication Between Hosts

It is best to use TLS certificates to enable secure communication between hosts. To do so, configure a Certificate Authority (CA) server with its certificate (cacert.pem), then create server certificates signed by that CA (server.pem and serverkey.pem). Then configure the daemon to read these certificates and set the “tls” option to “true”. The daemon configuration file will be as shown:

{
  "hosts": ["tcp://"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}

Note that the default daemon port with TLS enabled is 2376 (the unencrypted default is 2375). We can now allow users to access the Docker API by pointing the DOCKER_HOST environment variable to the host:

export DOCKER_HOST="tcp://"

Now set the DOCKER_TLS environment variable to “true” to initiate a secure connection.

export DOCKER_TLS="true"

This sets up encrypted communication between the client and the Docker server. This communication, however, lacks authentication: anyone who knows the exposed daemon port can still access the daemon by setting the DOCKER_TLS environment variable to “true” and pointing the DOCKER_HOST environment variable to the host.

To enable certificate-based authentication, copy the CA certificate (cacert.pem) to the daemon’s server side, then set the tlsverify parameter to true in the daemon’s JSON file as shown below:

{
  "hosts": ["tcp://"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "tlsverify": true,
  "tlscacert": "/var/docker/cacert.pem"
}

Here the tlsverify option enables authentication, while the tlscacert option points to the CA certificate used to verify client certificates. We will then generate client certificates signed by the certificate authority. To do so, generate client certificates (client.pem and clientkey.pem), then share these securely with the client.
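The certificate generation step can be sketched with openssl. This is a minimal, self-contained demonstration that creates a throwaway CA so the commands run end to end; in practice you would sign the client CSR with your existing CA key, and the subject names below are placeholders:

```shell
# Demo only: create a throwaway CA (in practice, use your existing CA key/cert).
openssl genrsa -out cakey.pem 2048
openssl req -new -x509 -key cakey.pem -subj "/CN=demo-ca" -days 1 -out cacert.pem

# Generate the client key and a certificate signing request (CSR).
openssl genrsa -out clientkey.pem 2048
openssl req -new -key clientkey.pem -subj "/CN=docker-client" -out client.csr

# Sign the CSR with the CA to produce the client certificate.
openssl x509 -req -in client.csr -CA cacert.pem -CAkey cakey.pem \
  -CAcreateserial -days 1 -out client.pem
```

The resulting client.pem and clientkey.pem are then distributed securely to the client.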

On the client side, activate TLS verification using this command:

export DOCKER_TLS_VERIFY="true"

Then pass in the client certificates through the command line, or drop them into the .docker directory within the user’s home directory.

docker --tlsverify --tlscacert=<> --tlscert=<> --tlskey=<> ps

When the certificates are placed in the .docker directory, the Docker client picks them up automatically. Now, only clients with certificates signed by the CA server can access your Docker Daemon.

Namespaces and Capabilities

In Docker, namespaces provide a way to isolate resources such as process IDs, network interfaces, and file systems. Each container that runs on a host machine has its own set of namespaces, which allows it to operate independently of other containers on the same machine. This helps prevent conflicts between containers and ensures that they can run without interfering with each other. 

Namespaces in Docker are implemented using the Linux kernel's namespaces feature, which allows processes to have a different view of the system resources than other processes. This means that containers can have their own virtualized view of the system, which is separate from the host machine and other containers running on it. 
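You can see these namespace memberships on any Linux host: the kernel exposes each process's namespaces as symlinks under /proc/&lt;pid&gt;/ns, and two processes that share a namespace point at the same inode. A quick inspection, no Docker required:

```shell
# List the namespaces the current shell belongs to; each symlink's
# target encodes the namespace type and its inode number.
ls -l /proc/self/ns/

# The PID namespace of this process, printed as e.g. "pid:[4026531836]".
readlink /proc/self/ns/pid
```

A process inside a container shows different inode numbers here than the host's processes, which is exactly the isolation described above.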

Docker also provides the ability to create network policies that apply to specific containers or sets of containers. These policies can be used to control network traffic between containers or to restrict access to specific resources within a container.

In Docker, every process has a Process Identifier (PID) attached to it. When you boot up Linux, the first process to start (PID 1) is known as the root process. A process can have a different PID in each PID namespace it belongs to: inside a container it may appear as PID 1, while the host sees the same process under a different PID.

Docker also assigns different privileges to different users. There are two main types of users in Docker: root and non-root. Root users have administrative privileges and can create, manage, and delete containers, while Non-Root users are those without the super privileges of a Root user. 

Here is how you can set a user ID through the Docker Command Line Interface. First, start the container using the command below:

docker run -d --name test --user=1000 ubuntu sleep 1000

Execute this command to attach the terminal to the container shell:

docker exec -it test /bin/bash

Run this command to see the process ID:

ps -aux

You can also specify the user ID in the image’s Dockerfile:

FROM ubuntu
USER  1000

Then build this image by running the command:

docker build -t my-ubuntu-image .

Root user privileges only apply to users on the Docker host. Container root users typically have limited capabilities.

Linux Capabilities

Capabilities outline the roles and privileges of various users in a system. The root user is the system’s most powerful user. Root users and processes have unrestricted system access, including: 

  • Creating, killing, and managing processes and containers
  • Setting user and group IDs
  • Network operations
  • System operations, and many more.

The full list of capabilities can be found at /usr/include/linux/capability.h.
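You can also inspect a process's capability sets directly, since the kernel reports them as hex bitmasks in /proc/&lt;pid&gt;/status:

```shell
# The kernel reports each process's capability sets in /proc/<pid>/status:
# CapInh (inheritable), CapPrm (permitted), CapEff (effective),
# CapBnd (bounding), and CapAmb (ambient), shown as hex bitmasks.
grep Cap /proc/self/status

# If the libcap tools are installed (an assumption), capsh can decode a mask:
# capsh --decode=00000000a80425fb
```

An unprivileged process typically shows an effective set (CapEff) of all zeros, while root shows a full bounding set.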

To check the capabilities of a normal running container, execute the following commands sequentially:

docker run -d --name test --user=1000 ubuntu sleep 1000
docker exec -it test /bin/bash
apt update && apt install -y libcap-ng-utils


Rather than running containers with --privileged, grant only the specific capability a workload needs with --cap-add; for example, NET_ADMIN for modifying network interfaces. Execute the commands below sequentially to achieve this.

docker run -d --name test1 --cap-add NET_ADMIN ubuntu sleep 1000
docker exec -it test1 /bin/bash
apt update && apt install -y libcap-ng-utils



Control Groups (CGroups) allow the allocation and distribution of resources among different processes and containers.

Resource Limits

All Docker containers run on the host’s kernel and share kernel resources with other processes. From the host’s point of view, containers are just processes. By default, a container has no resource constraints and is allowed access to unlimited resources within the host. If necessary, a container could utilize all of a host’s resources, depriving other processes of them. When memory runs out, the kernel’s Out of Memory (OOM) killer starts killing processes to free up resources; in extreme circumstances, it could kill native host processes to preserve the system.


When two running processes need the same CPU, each process gets an equal share of CPU time. These processes don’t run concurrently; instead, they take turns using the CPU. These context switches happen in microseconds, making it appear as though the processes run simultaneously. You can also allocate more CPU time to high-priority applications through CPU timeshares.

In this case, if process A gets a 1024-share allocation while process B gets 512, process A will receive twice as much CPU time as process B whenever both compete for the CPU.
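The resulting split can be checked with simple shell arithmetic (the 1024 and 512 figures follow the example above):

```shell
# Relative CPU time under contention is a process's shares divided
# by the total shares of all competing processes.
a=1024   # process A's cpu-shares
b=512    # process B's cpu-shares
echo "A gets $((a * 100 / (a + b)))% of CPU time"   # prints "A gets 66% of CPU time"
```

Note that shares only matter under contention; an idle host lets any container use as much CPU as it wants.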

Docker relies on the Linux kernel’s CPU schedulers to enforce CPU timeshares; the two most common are the Completely Fair Scheduler (CFS) and the real-time scheduler. To allocate 512 CPU shares to a container, run the command:

docker container run --cpu-shares=512 nginx

You can limit CPU usage by defining the CPUs that a particular process can consume. We do this by specifying the CPU Sets. The format for defining a CPU set is:

docker container run --cpuset-cpus=0-1 webapp

This command ensures that the process uses the first two CPUs in an array within the host. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).

Additionally, you can also limit the number of CPUs a process can use by specifying the CPU Count using the command:

docker container run --cpus=2.5 nginx

With the above command, if the host machine has three CPUs and you set --cpus=2.5, the container is limited to at most two and a half of the CPUs.

To add or reduce the CPU count, you can update it with the command:

docker container update --cpus=2.5 container-name

Without the proper mechanisms to limit CPU usage, a process or container could consume too much CPU power, taking up most or all of a host’s resources and making that server unresponsive.


Every system consists of Physical Memory known as Random Access Memory (RAM). A process can consume as much RAM as is available within the host, unless we enforce some limits. When all system RAM has been consumed, Linux will use SWAP space configured on the host for memory allocation. SWAP space is space allocated on physical storage disks that can be used as memory. If a process uses up all RAM and SWAP memory, it is killed using an Out of Memory (OOM) exception.

You can specify the memory limit to be consumed by an application by including it in the command as shown:

docker container run -d --name webapp --memory=512m nginx

You can also specify the SWAP space limit as follows:

docker container run --memory=512m --memory-swap=512m webapp

In this case, the container gets 0 MB of swap, since the swap allowance is the difference between the --memory-swap and --memory values. To allocate 256 MB of swap, we’ll run the command:

docker container run --memory=512m --memory-swap=768m webapp

If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set.
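The swap arithmetic can be sketched in a few lines of shell, using the 512 MB and 768 MB figures from the example above:

```shell
# Swap available to a container = --memory-swap minus --memory.
memory=512        # MB, value of --memory
memory_swap=768   # MB, value of --memory-swap
echo "swap: $((memory_swap - memory)) MB"   # prints "swap: 256 MB"
```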

To display a live stream of container(s) resource usage statistics:

docker stats

At KodeKloud, we have a comprehensive Docker Certified Associate Exam preparation course. The course explains all Docker concepts included in the certification's curriculum. After each topic, you get interactive quizzes to help you internalize the concepts learned. At the end of the course, we have mock exams that will help familiarize you with the exam format, time management, and question types.

Docker Certified Associate Exam Course | KodeKloud
Prepare for the Docker Certified Associate Exam Course

You can now proceed to the next part of this series: Docker Certified Associate Exam Series (Part-7): Docker Engine Storage & Volumes

Research Questions

This concludes the Docker Engine Security chapter of the DCA certification exam. To test your knowledge, it is strongly recommended that you work through the research questions covering all the core concepts in the coursework, and take the practice test to prepare for the exam. You can also send feedback to the course developers if you would like something changed within the course.

Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back. 

Quick Tip – Questions below may include a mix of DOMC and MCQ types.

1. What is a Linux feature that prevents a process within the container from performing filesystem related operations such as altering attributes of certain files? 

[A] Control Groups (CGroups)

[B] Namespaces

[C] Kernel Capabilities

[D] Network Namespaces

2. What flags are used to configure encryption on Docker daemon without any authentication?

[A] tlsverify, tlscert, tlskey

[B] key, cert, tls

[C] tls, tlscert, tlskey

[D] host, key, cert, tls

3. What will happen if --memory-swap is set to 0?

[A] The container does not have access to swap

[B] The container is allowed to use unlimited swap

[C] The setting is ignored, and the value is treated as unset

4. By default, all containers get the same share of CPU cycles. How to modify the shares?

[A] docker container run --cpu-shares=512 webapp

[B] docker container run --cpuset-cpus=512 webapp

[C] docker container run --cpu-quota=512 webapp

[D] docker container run --cpus=512 webapp

5. Limit the container webapp to only use the first CPU or core. Select the right command.

[A] docker container run --cpuset-shares=1 webapp

[B] docker container run --cpus=0 webapp

[C] docker container run --cpuset-cpus=0 webapp

6. Assume that you have 1 CPU; which of the following commands guarantees the container at most 50% of the CPU every second?

[A] docker run -it --cpu-shares=512 ubuntu /bin/bash

[B] docker container run --cpuset-cpus=.5 webapp

[C] docker run -it --cpus=".5" ubuntu /bin/bash

[D] docker run -it --cpus=".5" --cpuset-cpus=1 ubuntu /bin/bash

7. What is a Linux feature that allows isolation of containers from the Docker host?

[A] Control Groups (CGroups)

[B] Namespaces

[C] Kernel Capabilities


8. By default, a container has no resource constraints.

[A] true

[B] false


By following this study guide up to this part of the series, you have prepared yourself to handle all Docker Engine Security questions.

On KodeKloud, you also get a learning path with recommendations, sample questions, and tips for clearing the DCA exam. Once you are done with this section, you can head over to the research questions and practice test sections to examine your understanding of Docker Engine Security.