Docker Certified Associate Exam Series (Part-1): Container Orchestration
The Docker Certified Associate (DCA) exam is a certification program offered by Docker, designed to validate the skills and knowledge of Docker professionals. The exam tests a candidate's understanding of Docker fundamentals, Docker Community Edition, Docker Compose, Docker Swarm, and related technologies. This 7-part series covers all the essential topics of the DCA exam.
In this first part of the 7-part blog series, we’ll cover container orchestration in Docker. Here are the other six parts:
- Docker Certified Associate Exam Series (Part-2): Kubernetes
- Docker Certified Associate Exam Series (Part-3): Image Creation, Management, and Registry
- Docker Certified Associate Exam Series (Part-4): Installation and Configuration
- Docker Certified Associate Exam Series (Part-5): Networking
- Docker Certified Associate Exam Series (Part-6): Docker Engine Security
- Docker Certified Associate Exam Series (Part-7): Docker Engine Storage & Volumes
What is Container Orchestration?
Container orchestration refers to the automated management of the container lifecycle: scheduling, deployment, scaling, networking, and availability. By automating these tasks, orchestration makes it far easier to manage and scale containerized applications.
Container orchestration tools help you set up many application instances using a single command. They allow you to deploy and manage complex containerized applications with high availability, fault tolerance, and scalability.
Three of the most popular container orchestration tools are Docker Swarm, Kubernetes, and Apache Mesos.
- Docker Swarm is easy to set up, but it has drawbacks around autoscaling and customization.
- Mesos is challenging to use and is generally recommended only for advanced cloud developers.
- Kubernetes is the most popular container orchestration solution, offering extensive customization options and strong auto-scaling capabilities.
For this part of the study guide series, we shall cover Docker Swarm.
Docker Swarm
Docker Swarm is Docker's native clustering and orchestration tool. It allows Docker users to deploy and manage a cluster of Docker hosts, making it easy to scale and manage containerized applications across multiple hosts. Docker Swarm provides features such as service discovery, load balancing, and rolling updates, enabling users to run and maintain their applications seamlessly.
Below are the Docker Swarm topics you’ll need to learn to pass the DCA exam:
- Swarm Architecture
- Setting up a 2-node cluster in Swarm
- Creating a demo swarm cluster setup
- Basic Swarm Operations
- Swarm High Availability and the Importance of Quorum
- Swarm in High Availability Mode
- Auto-lock and a classroom demo
- Swarm Services
- Rolling Updates, Rollbacks, and Scaling
- Swarm Service Types
- Placement in Swarm
- Services in Swarm: Basic Operations
- Services in Swarm: Placements, Global, Parallelism, and Replicated
- Docker Config Objects
- The Docker Overlay Network
- Macvlan Networks
- Swarm Service Discovery
- Docker Stack
Let’s explore some of these areas in detail.
Swarm Architecture
As you study for your exam, you should learn the architecture of Docker Swarm.
Docker Swarm lets you integrate different Docker machines into a single cluster. This helps with your application’s load balancing and also improves its availability. A Docker Cluster is made up of different instances called Nodes. Nodes can be categorized into two types: Manager and Worker nodes.
A Manager Node receives instructions from a user and turns them into service tasks, which are then assigned to one or more worker nodes. This node also helps maintain the desired state of the cluster to which it belongs. Managers can be configured to run production workloads, too, when needed.
On the other hand, a Worker Node receives instructions from the manager nodes and uses these instructions to deploy and run the necessary containers.
Features of Docker Swarm Architecture
Some features of Docker Swarm Architecture include:
- Swarm is easy to set up and maintain since all the features of Docker Swarm are embedded in the Docker Engine.
- Docker Swarm deploys applications in a Declarative format.
- The Swarm manager distributes application instances across worker nodes and lets you scale services up or down on demand.
- Rolling updates reconfigure your application instances one at a time to prevent downtime.
- Docker Swarm performs desired state reconciliation for self-healing applications.
- SSL/TLS certificates secure communication between nodes through mutual authentication and encryption.
- An external load balancer can be used to distribute requests across nodes.
Setting up a 2-Node Swarm Cluster
This lesson demonstrates how to create a Docker Swarm cluster with one manager node and two worker nodes.
Prerequisites
To follow this session, you will need:
- Machines (Nodes) deployed and designated as Manager, Worker-1, and Worker-2.
- The machines should have the Docker Engine installed. See this guide on how to download and install it.
- Each node should be assigned a static IP address.
- The required ports should be open: TCP 2377 (cluster management), TCP and UDP 7946 (communication among nodes), and UDP 4789 (overlay network traffic).
Commands to Initialize Docker Swarm
To initialize Docker Swarm, run the following command on the node that will act as the manager:
docker swarm init
Swarm initialized: current node (whds9866c56gtgq3uf5jmfsip) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
This command initializes Swarm on the selected node, which is now a manager. The command also returns a script you will use to add a worker to this swarm, as indicated on the Command Line Interface.
To retrieve the worker join command again (for example, to add another worker), run the following on the manager:
docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377
To display a list of your nodes along with their names and status, run the following command on the manager:
docker node ls
Swarm Operations
Let's look at some common Swarm operations involving promoting, draining, and deleting nodes.
To promote a node to a manager, run the command:
docker node promote worker1
Node worker1 promoted to a manager in the swarm.
To demote a manager node to a Worker, run the command:
docker node demote worker1
Manager worker1 demoted in the swarm.
When you want to perform upgrades or maintenance on your cluster, you may need to drain nodes one at a time.
To drain your node, use the command:
docker node update --availability drain worker1
worker1
This command stops the containers on worker1 and reschedules its tasks as replica instances on other active nodes until worker1 is brought back into service.
Once you are done patching or maintaining your node, you will run the update command with the availability option set to active to bring it back up. See the command below:
docker node update --availability active worker1
worker1
To remove a node from a cluster, first drain it so that its workload is rescheduled onto other nodes, then run the following command on that node:
docker swarm leave
Node left the swarm.
Swarm High Availability and Quorum
Having multiple managers in a cluster provides fault tolerance. When more than one manager node is running in a cluster, Docker Swarm uses the Raft consensus algorithm to maintain distributed consensus.
Raft elections are triggered by randomized timeouts: the first manager whose timeout expires asks the other managers in the cluster to vote for it as Leader. If a majority respond positively, it assumes the leader role, sending notifications and updating a shared database that records the state of the cluster.
This database is available to all managers in the cluster. Before the leader makes any change to the cluster, it sends the proposal to the other managers; the change takes effect only after a quorum of managers agrees. If the leader loses connectivity, the remaining managers elect a new leader.
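As a toy sketch of the quorum-commit idea (plain shell arithmetic, not the actual Raft protocol), a proposed change is committed only once acknowledgements from managers reach a majority:

```shell
# Toy model of quorum-based commit (illustration only, not real Raft):
# a change is committed only when a majority of the 5 managers acknowledge it.
managers=5
quorum=$(( managers / 2 + 1 ))   # majority = 3 of 5
acks=0
for vote in yes yes no yes no; do
  if [ "$vote" = "yes" ]; then
    acks=$(( acks + 1 ))
  fi
done
if [ "$acks" -ge "$quorum" ]; then
  echo "committed with $acks/$managers acknowledgements"
else
  echo "rejected with $acks/$managers acknowledgements"
fi
# prints: committed with 3/5 acknowledgements
```

With three of the five managers voting yes, the change reaches quorum and is committed.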
The best practices for high availability in Swarm include:
- Each cluster should have an odd number of managers, so that a majority can still be reached if the network is segmented.
- Every decision must be agreed upon by a quorum of managers. The quorum for a cluster with N managers is floor(N/2) + 1.
- The number of manager failures a cluster can withstand, its fault tolerance, is (N - 1) / 2, rounded down.
- Distribute manager nodes equally over different data centers/availability zones so the cluster can withstand sitewide disruptions.
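Quorum and fault tolerance for a manager count N can be checked with a few lines of shell arithmetic, where quorum = floor(N/2) + 1 and fault tolerance = N - quorum (integer division gives the floor automatically):

```shell
# Quorum = floor(N/2) + 1; fault tolerance = N - quorum = floor((N-1)/2).
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( n - quorum ))
  echo "managers=$n quorum=$quorum fault_tolerance=$tolerance"
done
```

For example, a 7-manager cluster has a quorum of 4 and can tolerate the loss of 3 managers, while a 3-manager cluster tolerates only 1.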
If more than the allowed number of managers fail, you cannot perform managerial duties on your cluster. The worker nodes will, however, continue to run normally with all the services and configuration settings still active.
To recover a failed cluster, first attempt to bring the failed managers back online. If that fails, you can reinitialize the swarm from a surviving manager with the --force-new-cluster flag, which produces a healthy single-manager cluster. The command is:
docker swarm init --force-new-cluster
Once this cluster has been created, you can promote other workers into manager nodes using the promote command.
Swarm Services
In Docker Swarm, services define the tasks (containers) that should run in the cluster and the constraints and policies for these tasks. Services help manage the containers across the swarm and enable the application's scalability and load balancing. You can create, update, and scale services using Docker commands or the Docker API.
To create 3 replicas of your application as a Swarm service, run the command:
docker service create --replicas=3 app1
When you create a service, the manager's API server receives the request and creates the service object. The orchestrator then breaks the service down into tasks, the allocator assigns each task an IP address, the dispatcher assigns tasks to worker nodes, and the scheduler manages task execution on the workers.
Common Docker Commands and Their Tasks
Here are a few common service tasks and their Docker Commands.
Task | Command |
Create an overlay network | docker network create --driver overlay my-overlay-network |
Create an overlay network with a custom subnet | docker network create --driver overlay --subnet 10.15.0.0/16 my-overlay-network |
Make a network attachable by standalone containers | docker network create --driver overlay --attachable my-overlay-network |
Enable IPSec encryption on an overlay network | docker network create --driver overlay --opt encrypted my-overlay-network |
Attach a service to a network | docker service create --network my-overlay-network my-web-service |
Delete a network | docker network rm my-overlay-network |
Docker Swarm Network
In Docker Swarm, these default networks are created when the swarm is initialized:
- Ingress network: This network routes ingress traffic to the appropriate swarm services.
- docker_gwbridge network: a bridge network that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network, allowing containers and tasks to reach the external network.
These default networks are created to facilitate communication and traffic routing within a Docker Swarm environment.
Network Ports and their Purposes
Here are some network ports and their purposes.
Port | Purpose |
TCP 2377 | Cluster management communications |
TCP & UDP 7946 | Container network discovery/communication among nodes |
UDP 4789 | Overlay network traffic |
To publish a host on port 80 pointing to a container on port 5000, use the command:
docker service create -p 80:5000 my-web-server
or,
docker service create --publish published=80,target=5000 my-web-server
To include UDP:
docker service create -p 80:5000/udp my-web-server
or,
docker service create --publish published=80,target=5000,protocol=udp my-web-server
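Note that the long-form --publish value is a comma-separated list of key=value fields with no spaces around the commas. As an illustration of how the three fields decompose (using only shell string tools, no Docker required):

```shell
# Split an example long-form publish spec into its fields (illustration only;
# Docker itself parses this string when the service is created).
spec="published=80,target=5000,protocol=udp"
published=$(echo "$spec" | cut -d',' -f1 | cut -d'=' -f2)
target=$(echo "$spec" | cut -d',' -f2 | cut -d'=' -f2)
protocol=$(echo "$spec" | cut -d',' -f3 | cut -d'=' -f2)
echo "host port $published -> container port $target over $protocol"
# prints: host port 80 -> container port 5000 over udp
```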
Swarm Service Discovery
Containers and services in a swarm can communicate with each other directly using their names, thanks to Swarm's built-in DNS-based service discovery. To make sure two services can 'see' each other, attach both of them to the same overlay network. For instance:
Create an overlay network:
docker network create --driver=overlay app-network
Then, create an API server within this network:
docker service create --name=api-server --replicas=2 --network=app-network api-server
Create the web service task:
docker service create --name=web --network=app-network web
The services can now reach each other using their service names. The web server can now reach the api-server using the service name api-server.
Docker Stack
In Docker, a stack is a group of interrelated services that together form an application. A stack is defined in a Compose file (often called a stack file), which stores your application's configuration settings.
Docker Compose lets you write stack files in YAML, which makes your application easier to manage, distribute, and scale. The stack file also lets you define health checks for your containers, including a grace period during which failed health checks are not counted. To deploy a stack from a Compose file, run the command:
docker stack deploy --compose-file docker-compose.yml <stack-name>
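The Compose file itself might look like the following minimal sketch. The service name web and the image nginx:alpine are hypothetical examples, chosen to show a replica count under deploy and a health check whose start_period sets the grace period mentioned above:

```shell
# Write a minimal example stack file (service/image names are illustrative).
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 5s
      start_period: 40s
EOF
echo "wrote docker-compose.yml"
```

You would then deploy it with docker stack deploy --compose-file docker-compose.yml demo-stack, where the stack name demo-stack is also just an example.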
Other Docker Stack Commands
Other Docker Stack Commands include:
Task | Command |
Create a Stack | $ docker stack deploy |
List active stacks | $ docker stack ls |
List services created by a stack | $ docker stack services |
List tasks running in a stack | $ docker stack ps |
Delete a Stack | $ docker stack rm |
Docker Storage
To understand how container orchestration tools manage storage, it is important to know how Docker manages storage in containers. Storage in Docker is implemented using storage drivers and volume drivers.
Docker uses storage drivers to implement its layered image architecture. A storage driver is attached to each container by default and stores its data under the default path /var/lib/docker, in subfolders such as aufs, containers, image, and volumes.
Popular storage drivers include AUFS, ZFS, Btrfs, Device Mapper, Overlay, and Overlay2.
Volume Driver Plugins in Docker
Volume Drivers help create persistent volumes in Docker. By default, volumes are assigned a local driver that stores data on the host’s volume directory.
Some third-party volume driver plugins provide storage on various public cloud platforms. These include Azure File Storage, Convoy, DigitalOcean Block Storage, Flocker, gce-docker, GlusterFS, NetApp, RexRay, Portworx, and VMware vSphere Storage, among others. You can choose the volume driver that suits your operating system and application needs.
To create a volume on Amazon AWS ElasticBlockStorage, run the command:
docker run -it \
--name app \
--volume-driver rexray/ebs \
--mount src=ebs-vol,target=/var/lib/app1 \
app1
This command uses the rexray/ebs volume driver to create a persistent volume (ebs-vol) on Amazon EBS and mounts it at /var/lib/app1 inside the container.
You can now proceed to the next part of this series: Docker Certified Associate Exam Series (Part-2): Kubernetes.
Sample Questions:
Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back.
Quick Tip – Questions below may include a mix of DOMC (Discrete Option Multiple Choice) and MCQ types.
1. Which command can be used to remove a kubeapp stack?
[A] docker stack deploy kubeapp
[B] docker stack ls kubeapp
[C] docker stack services kubeapp
[D] docker stack rm kubeapp
2. Which command can be used to promote worker2 to a manager node? Select the right answer.
[A] docker promote node worker2
[B] docker node promote worker2
[C] docker swarm node promote worker2
[D] docker swarm promote node worker2
3. What is the command to list the stacks in the Docker host?
[A] docker stack deploy
[B] docker stack ls
[C] docker stack services
[D] docker stack ps
4. What is the maximum number of managers possible in a swarm cluster?
[A] 3
[B] 5
[C] 7
[D] No limit
5. … are one or more instances of a single application that runs across the Swarm Cluster.
[A] docker stack
[B] services
[C] pods
[D] None of the above
Conclusion
Once you have understood Swarm architecture, learned to set up a cluster, and ensured high availability, you will be well prepared to tackle real-world projects. Swarm services automate the interaction between nodes and help with load balancing for distributed containers.
At KodeKloud, we have a highly-rated DCA exam preparation course. This course covers all the required topics from the DCA curriculum.