Docker Certified Associate Exam Series (Part-1): Container Orchestration

The Docker Certified Associate (DCA) exam is a certification program offered by Docker that validates the skills and knowledge of Docker professionals. The exam tests the candidate's knowledge of Docker fundamentals, Docker Community Edition, Docker Compose, Docker Swarm, and related technologies. This 7-part series covers all the essential parts of the DCA exam.

In this first part of the 7-part blog series, we’ll cover container orchestration in Docker.

What is Container Orchestration?

Container orchestration refers to the automated management of Docker containers, including deployment, scaling, and networking. It is the process of managing the lifecycle of containers, including scheduling, deployment, scaling, and availability. Container orchestration ensures that containers are deployed and managed in a consistent and reliable manner, making it easier to manage and scale containerized applications.

Deploying in Docker typically involves running various applications on different hosts. Container orchestration helps you set up a large number of application instances using a single command. Simply put, container orchestration tools make it easy to deploy and manage complex containerized applications, ensuring high availability, fault tolerance, and scalability.

Three of the most popular container orchestration tools are Docker Swarm, Kubernetes, and Apache Mesos.

  • Docker Swarm is hugely popular and easy to set up but has a few drawbacks when it comes to autoscaling and customizations. 
  • Mesos is challenging to use and is only recommended for advanced cloud developers. 
  • Kubernetes is a popular container orchestration solution that offers plenty of customization options and unmatched auto-scaling capabilities. 

For this part of the study guide series, we shall cover Docker Swarm.

Docker Swarm

Docker Swarm helps you run applications seamlessly on the Docker Engine through multiple containers spread across a cluster of nodes. With Docker Swarm, you can easily monitor the state, health, and performance of your containers and of the hosts that run your applications. 

As you prepare for the DCA exam, some of the Docker Swarm topics that you’ll need an in-depth understanding of include:

  • Swarm Architecture
  • Setting up a 2-node cluster in Swarm
  • Creating a demo swarm cluster setup
  • Basic Swarm Operations
  • Swarm High Availability and the Importance of Quorum
  • Swarm in High Availability Mode
  • Auto-lock and a classroom demo 
  • Swarm Services
  • Rolling Updates, Rollbacks, and Scaling
  • Swarm Service Types
  • Placement in Swarm
  • Service in Swarm- Basic Operations
  • Service in Swarm- placements, global, parallelism, and replicated
  • Docker Config Objects
  • The Docker Overlay Network
  • Macvlan Networks
  • Swarm Service Discovery
  • Docker Stack

Let’s explore some of these areas in detail.

Swarm Architecture

As you study for your exam, you should develop a high level of familiarity with the structure and architecture of Docker Swarm.

Docker Swarm lets you integrate different Docker machines into a single cluster. This helps with your application’s load balancing and also improves its availability. A Docker Cluster is made up of different instances called Nodes. Nodes can be categorized into two types: Manager and Worker nodes.

A Manager Node receives instructions from a user and turns them into service tasks, which it assigns to one or more worker nodes. Manager nodes also maintain the desired state of the cluster to which they belong. Managers can be configured to run production workloads, too, when needed.

On the other hand, a Worker Node receives instructions from the manager nodes and uses these instructions to deploy and run the necessary containers. 

Features of Docker Swarm Architecture

Some features of Docker Swarm Architecture include:

  • Swarm is easy to set up and maintain since all the features of Docker Swarm are embedded in the Docker Engine.
  • Docker Swarm deploys applications in a Declarative format.
  • The Swarm manager automatically scales and distributes application instances across worker nodes depending on demand.
  • Rolling updates reconfigure your application instances one at a time for easier change management.
  • Docker Swarm performs desired state reconciliation for self-healing applications.
  • SSL/TLS certificates secure communication between nodes via authentication and encryption
  • Service ports can be exposed to an external load balancer, while Swarm distributes requests between nodes internally

Setting up a 2-Node Swarm Cluster

This lesson demonstrates how to create a Docker Swarm cluster with two worker nodes and one manager.

Swarm Cluster Prerequisites

The prerequisites for this setup are:

  • Machines (Nodes) deployed and designated as Manager, Worker-1, and Worker-2.
  • The machines should have the Docker Engine installed.
  • Each node should be assigned a static IP address.
  • Ports TCP 2377, TCP/UDP 7946, and UDP 4789 should be open.

Commands to Initialize Docker Swarm

To initialize Docker Swarm, run the following command on the node you want to act as the manager:

docker swarm init

Swarm initialized: current node (whds9866c56gtgq3uf5jmfsip) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

This command initializes Swarm on the selected node, which becomes a manager. It also prints the command you will use to join workers to this swarm, as shown in the output above. 

To retrieve the join command for adding more workers to this Swarm, run the following on the manager:

docker swarm join-token worker

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377

To display a list of your nodes along with their names and status, type the following command in the CLI:

docker node ls

Swarm Operations

Let's look at some common Swarm operations involving promoting, draining, and deleting nodes. 

To promote a node to a manager, run the command:

docker node promote worker1
Node worker1 promoted to a manager in the swarm.

To demote a manager node to a Worker, run the command:

docker node demote worker1
Manager worker1 demoted in the swarm.

When you want to perform upgrades and maintenance on your cluster, you might need to drain nodes one at a time.

To drain your node, use the command:

docker node update --availability drain worker1
worker1

This command shuts down the containers on worker1 and reschedules them on other active workers. While drained, worker1 receives no new tasks.

Once you are done patching or maintaining the node, set its availability back to active to bring it back into service:

docker node update --availability active worker1
worker1

To remove a node from a cluster, first drain it so that its workload is redistributed, then run the following on the node itself:

docker swarm leave
Node left the swarm.

Swarm High Availability: Quorum

Docker Swarm uses the Raft consensus algorithm to maintain distributed consensus when more than one manager node runs in a cluster. Having multiple managers in a cluster helps with fault tolerance. 

Raft uses randomized election timeouts. The first manager whose timeout expires becomes a candidate and requests votes from the other managers in the cluster. If a majority of managers vote for it, the candidate becomes the Leader, sending notifications and updating a shared log on the state of the cluster. This log is replicated to all managers in the cluster. Before the leader makes any change to the cluster, it proposes the change to the other managers; a Quorum must agree before the change takes effect. If the leader loses connectivity, the remaining managers initiate a process to elect a new leader. 
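The election process described above can be illustrated with a toy sketch. This is not Docker's or Raft's actual implementation, just a minimal model of the idea that the first manager to time out asks for votes and becomes leader only with a quorum:

```python
import random

def elect_leader(managers, seed=None):
    """Toy Raft-style election: randomized timeouts, majority vote."""
    rng = random.Random(seed)
    # Each manager draws a randomized election timeout (in milliseconds).
    timeouts = {m: rng.uniform(150, 300) for m in managers}
    # The manager whose timeout expires first becomes the candidate.
    candidate = min(timeouts, key=timeouts.get)
    # In this toy model every manager is reachable and grants its vote.
    votes = len(managers)
    quorum = len(managers) // 2 + 1
    # The candidate becomes leader only with a majority (quorum) of votes.
    return candidate if votes >= quorum else None

print(elect_leader(["mgr-1", "mgr-2", "mgr-3"], seed=42))
```

In a real cluster, unreachable managers simply don't vote, which is why losing a quorum of managers halts leader election entirely.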

The best practices for high availability in Swarm include:

  • Each cluster should have an odd number of managers so it can better tolerate network partitions.
  • Every decision must be agreed upon by a Quorum of managers. The quorum for a cluster with N managers is N/2 + 1.
  • The number of manager failures a cluster can withstand is its Fault Tolerance, calculated as (N-1)/2.
  • Distribute manager nodes equally over different data centers/availability zones so the cluster can withstand sitewide disruptions.
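The quorum and fault-tolerance arithmetic above (with integer division) can be checked with a few lines of code:

```python
def quorum(n_managers: int) -> int:
    """Managers that must agree on any change: floor(N/2) + 1."""
    return n_managers // 2 + 1

def fault_tolerance(n_managers: int) -> int:
    """Manager failures the cluster survives: floor((N-1)/2)."""
    return (n_managers - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} managers -> quorum {quorum(n)}, tolerates {fault_tolerance(n)} failures")
# 3 managers -> quorum 2, tolerates 1 failure
# 5 managers -> quorum 3, tolerates 2 failures
```

This is why even-sized manager sets buy nothing: 4 managers tolerate the same single failure as 3, while adding one more point of failure.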

If more than the allowed number of managers fail, you cannot perform managerial duties on your cluster. The worker nodes will, however, continue to run normally with all the services and configuration settings still active.

To recover a failed cluster, first attempt to bring the failed managers back online. If this fails, you can create a new cluster from a surviving manager using the --force-new-cluster flag. This produces a healthy, single-manager cluster that retains the existing services and configuration. The command is:

docker swarm init --force-new-cluster

Once this cluster has been created, you can promote other workers into manager nodes using the promote command.

Swarm Services

As you begin to deploy your clusters, you’ll need a way to run multiple instances of your application across several worker nodes to help with automation and load balancing. The Docker Service allows you to launch containers in a coordinated manner across several nodes. 

To create 3 replicas of your application using the Docker service, run the command:

docker service create --replicas=3 app1

When you deploy your applications, an API server creates a service which is then divided into tasks by the orchestrator. The allocator then assigns each task an IP address, and a dispatcher assigns these tasks to individual workers. The scheduler then manages task handling by the workers.
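The pipeline above can be sketched as a toy model. This is not Docker's internal code; the task-naming scheme, addresses, and round-robin placement are simplified assumptions to show how a service request fans out into scheduled tasks:

```python
from itertools import cycle

def schedule_service(name, replicas, workers):
    """Toy sketch: service -> tasks -> addresses -> worker assignment."""
    # Orchestrator: split the service into one task per replica.
    tasks = [f"{name}.{i}" for i in range(1, replicas + 1)]
    # Allocator: assign each task an IP address (made-up subnet).
    addresses = {t: f"10.0.0.{i}" for i, t in enumerate(tasks, start=2)}
    # Dispatcher: spread tasks across worker nodes round-robin.
    assignment = dict(zip(tasks, cycle(workers)))
    return [(t, addresses[t], assignment[t]) for t in tasks]

for task, addr, node in schedule_service("app1", 3, ["worker1", "worker2"]):
    print(task, addr, node)
```

In real Swarm, the scheduler also weighs node resources and placement constraints rather than simply rotating through workers.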

Here are a few common service tasks and their Docker Commands.

  • Create an overlay network: docker network create --driver overlay my-overlay-network
  • Create a subnet: docker network create --driver overlay --subnet 10.15.0.0/16 my-overlay-network
  • Make a network attachable to standalone containers: docker network create --driver overlay --attachable my-overlay-network
  • Enable IPsec encryption: docker network create --driver overlay --opt encrypted my-overlay-network
  • Attach a service to a network: docker service create --network my-overlay-network my-web-service
  • Delete a network: docker network rm my-overlay-network
  • Delete all unused networks: docker network prune

Network Ports and their Purposes

Here are some network ports and their purposes.

  • TCP 2377: Cluster management communications
  • TCP/UDP 7946: Container network discovery / communication among nodes
  • UDP 4789: Overlay network traffic

To publish a host on port 80 pointing to a container on port 5000, use the command:

docker service create -p 80:5000 my-web-server 

or

docker service create --publish published=80,target=5000 my-web-server

To include UDP:

docker service create -p 80:5000/udp my-web-server 

or

docker service create --publish published=80,target=5000,protocol=udp my-web-server

Swarm Service Discovery

Containers and services in a node can communicate with each other directly using their names. To make sure that these containers can ‘see’ each other, you should create an overlay network in which you should place the application and the naming service. For instance:

Create an overlay network:

docker network create --driver=overlay app-network

Then create an API server within this network:

docker service create --name=api-server --replicas=2 --network=app-network api-server

Create the web service task:

docker service create --name=web --network=app-network web

The services can now reach each other using their service names. The web server can now reach the api-server using the service name api-server.

Docker Stack

In Docker, a stack is a group of interrelated services that together form the functionality of an application. All of your application’s configuration settings and changes are stored in a configuration file, known as a stack file, which uses the Docker Compose format. 

Docker Compose lets you write stack files in YAML, which makes your application easier to manage, distribute, and scale. The stack file also lets you define health checks for your containers and set a grace period during which failing health checks are not counted. To deploy a stack from a compose file, run the command (where myapp is the stack name):

docker stack deploy --compose-file docker-compose.yml myapp
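For illustration, here is a minimal stack file of the kind the command above expects. The service and image names are hypothetical; the healthcheck block shows how start_period sets the grace period during which failing health checks are not counted:

```yaml
version: "3.8"
services:
  web:
    image: my-web-app:latest            # hypothetical image name
    ports:
      - "80:5000"
    deploy:
      replicas: 3                       # run three tasks across the swarm
      update_config:
        parallelism: 1                  # rolling updates, one task at a time
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      start_period: 40s                 # grace period before checks count
```

Saved as docker-compose.yml, this file would deploy the web service with three replicas when passed to docker stack deploy.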

Other Docker Stack Commands

Other Docker Stack Commands include:

  • Create a stack: docker stack deploy
  • List active stacks: docker stack ls
  • List services created by a stack: docker stack services
  • List tasks running in a stack: docker stack ps
  • Delete a stack: docker stack rm

Docker Storage

To understand how container orchestration tools manage storage, it is important to know how Docker manages storage in containers. This knowledge will also go a long way in helping you manage storage with Kubernetes. Storage in Docker is managed by two techniques: storage drivers and volume drivers.

Docker uses storage drivers to enable its layered architecture. A storage driver is attached to containers by default and stores files under the default path /var/lib/docker, in subfolders such as aufs, containers, image, and volumes.

Popular storage drivers include AUFS, ZFS, Btrfs, Device Mapper, Overlay, and Overlay2.

Volume Driver Plugins in Docker

Volume Drivers help create persistent volumes in Docker. By default, volumes are assigned a local driver that stores data on the host’s volume directory.

Some third-party volume driver plugins help with storage on various public cloud platforms. These include Azure File Storage, Convoy, DigitalOcean BlockStorage, Flocker, gce-docker, GlusterFS, NetApp, RexRay, Portworx, and VMware vSphere Storage, among others. You choose the volume driver that best suits your operating system and application needs. 

To create a volume on Amazon AWS ElasticBlockStorage, run the command:

docker run -it \
    --name app1 \
    --volume-driver rexray/ebs \
    --mount src=ebs-vol,target=/var/lib/app1 \
    app1

This command creates a persistent volume (ebs-vol) on Amazon EBS and mounts it at /var/lib/app1 inside the app1 container.

You can now proceed to the next part of this series: Docker Certified Associate Exam Series (Part-2): Kubernetes

Sample Questions:

Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back. 

Quick Tip – Questions below may include a mix of DOMC and MCQ types.

1. Which command can be used to remove a kubeapp stack?

[A] docker stack deploy kubeapp

[B] docker stack ls kubeapp

[C] docker stack services kubeapp

[D] docker stack rm kubeapp

2. Which command can be used to promote worker2 to a manager node? Select the right answer.

[A] docker promote node worker2

[B] docker node promote worker2

[C] docker swarm node promote worker2

[D] docker swarm promote node worker2

3. What is the command to list the stacks in the Docker host?

[A] docker stack deploy

[B] docker stack ls

[C] docker stack services

[D] docker stack ps

4. What is the maximum number of managers possible in a swarm cluster?

[A] 3

[B] 5

[C] 7

[D] No limit

5. …  are one or more instances of a single application that runs across the Swarm Cluster.

[A] docker stack

[B] services

[C] pods

[D] None of the above

Conclusion

Once you have understood Swarm architecture, set up a cluster, and ensured high availability, you will have developed enough familiarity to tackle real-world projects. Swarm services will help you automate the interaction between various nodes to help with load balancing for distributed containers.

At KodeKloud, we have a highly-rated DCA exam preparation course. This course covers all the required topics from the DCA curriculum, including Docker Engine, Docker Compose, setting up a Docker Swarm cluster, Docker Enterprise products – such as Docker Enterprise Engine (now Mirantis Runtime Engine), Universal Control Plane (now Mirantis Kubernetes Engine), and Docker Trusted Registry (now Mirantis Secure Registry) – and, finally, the fundamentals of Kubernetes.
