Before the inception of Docker, developers predominantly relied on virtual machines. Virtual machines, however, are comparatively heavyweight and less efficient for packaging and deploying individual applications. Docker was introduced later, and it largely replaced VMs for this purpose by allowing developers to package and deploy applications more efficiently and effectively.
Before getting started with what Docker Swarm is, we first need to understand what Docker is as a platform.
Docker is a tool used to automate the deployment of an application as a lightweight container so that the application can work efficiently in different environments.
A Docker container is a lightweight software package that consists of the dependencies (code, frameworks, libraries, etc.) required to run an application.
We can use Docker Swarm to make Docker work across multiple nodes, allowing containers to be deployed and shared across them rather than being confined to a single host operating system.
Now that we have understood what Docker and Docker containers are, let us next look into what Docker Swarm is.
Docker Swarm is a container orchestration tool built into the Docker Engine. It helps end users create and deploy a cluster of Docker nodes.
Each node of a Docker Swarm runs a Docker daemon, and all the daemons interact using the Docker API. Each container within the swarm can be deployed to and accessed from nodes of the same cluster.
A Docker environment is made up of several critical elements. Consider an environment running multiple Docker containers: if one of the containers fails, we can use Swarm to correct that failure.
Docker Swarm can reschedule containers when a node fails. In addition, a manager node keeps the swarm state on disk (under /var/lib/docker/swarm by default), which can be backed up and used to restore the cluster onto a new swarm.
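As a rough sketch of both behaviors (the node and service names below are assumptions for illustration, not from the article), you can drain a node to watch its tasks being rescheduled, and archive the swarm state from a manager as a backup:
sudo docker node update --availability drain worker-1   # take worker-1 out of service; its tasks are rescheduled on other nodes
sudo docker service ps my-service                       # confirm the tasks now run elsewhere
sudo systemctl stop docker                              # stop Docker on the manager for a consistent backup
sudo tar -czvf swarm-backup.tar.gz /var/lib/docker/swarm # archive the swarm state directory
sudo systemctl start docker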
Some of the most essential features of Docker Swarm are cluster management integrated with the Docker Engine, declarative service configuration, scaling, desired-state reconciliation, multi-host networking, service discovery, and load balancing.
To extend our learning of what Docker Swarm is, let us look into the key concepts of swarm mode.
Global and Replicated Services
In Swarm, containers are launched using services. A service is a group of containers of the same image that enables the scaling of applications. Before you can deploy a service in Docker Swarm, you must have at least one node deployed.
There are two types of nodes in Docker Swarm: manager nodes and worker nodes.
Consider a situation where a manager node sends out commands to different worker nodes.
The manager node knows the status of the worker nodes in a cluster, and the worker nodes accept tasks sent from the manager node. Every worker node has an agent that reports on the state of the node's tasks to the manager. This way, the manager node can maintain the desired state of the cluster.
The worker nodes communicate with the manager node using the Docker API over HTTP. In Docker Swarm, services can be deployed and accessed by any node of the same cluster. While creating a service, you will have to specify which container image you are going to use. A service can be set up as either global or replicated: a global service runs one task on every node in the swarm, whereas for a replicated service, the manager node distributes a specified number of replica tasks among the worker nodes.
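As a brief sketch of the two modes (the image and service names below are illustrative assumptions, not from the article), both map to the --mode flag of docker service create:
sudo docker service create --name web --mode replicated --replicas 3 nginx:latest        # three replicas, distributed across nodes by the manager
sudo docker service create --name net-check --mode global alpine:latest ping docker.com  # exactly one task on every node in the swarm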
Now a question may arise: don't task and service refer to the same thing? The answer is no.
A service is a description of the desired state, whereas a task is the actual work that needs to be done. Docker enables a user to create services that start tasks. Once a task is assigned to a node, it cannot be moved to another node; it can only run on the assigned node or fail. It is possible to have multiple manager nodes within a Docker Swarm environment, but only one primary manager node is elected leader by the other manager nodes.
Therefore, the working of the Swarm can be summarized as follows:
A service is created through the Docker command-line interface or API. The orchestrator on the manager node creates a task for each service replica, and the allocator assigns each task an IP address. The dispatcher and scheduler then assign tasks to worker nodes and instruct them to run. Each worker node connects to the manager node, checks for new tasks, and finally executes the tasks it has been assigned.
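Once a service is running, this flow can be observed with standard commands (the service name here is illustrative):
sudo docker service ps web   # lists the tasks of the service and the node each task was scheduled on
sudo docker node ls          # the MANAGER STATUS column marks the elected leader among the manager nodes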
Now that we have a better understanding of what Docker Swarm is, let us next look into the differences between Docker Swarm and Kubernetes. The table below compares the two:
| Features | Kubernetes | Docker Swarm |
| --- | --- | --- |
| Installation | Complex | Simple |
| Load balancing | Manual intervention is required for load balancing | Automated load balancing |
| Scalability | Scaling and deployment are comparatively slower | Containers are deployed much faster |
| Cluster setup | Difficult to set up | Easy to set up |
| Container setup | YAML definitions must be rewritten when switching platforms | Containers can easily be deployed to different platforms |
| Logging and monitoring | Ships with built-in tools for both | Requires third-party tools for logging and monitoring |
| Availability | High availability when pods are distributed among the nodes | Increases the availability of applications through redundancy |
| Data volumes | Shared only with containers in the same pod | Can be shared with any container |
To strengthen our understanding of what Docker Swarm is, let us walk through a demo of setting up a swarm.
This tutorial requires two hosts, which can be either virtual machines or AWS EC2 instances.
The demo shows how to build and deploy a Docker Engine, run Docker commands, and install Docker Swarm.
Run the following command on the terminal:
sudo apt-get update
Before proceeding, uninstall any old Docker software using the following command:
sudo apt-get remove docker docker-engine docker.io
To install Docker on Ubuntu, run the following command:
sudo apt install docker.io
Set up and start the Docker service by entering the following commands in the terminal window:
sudo systemctl start docker
sudo systemctl enable docker
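Optionally, you can confirm that the Docker service is active before continuing (a standard systemd check, not part of the original steps):
sudo systemctl status docker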
To check the installed Docker version, enter the following command:
sudo docker --version
To run a Docker container, first pull a Docker image (such as MySQL) from Docker Hub:
sudo docker pull mysql
sudo docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql:latest   # the password here is only a placeholder
Docker pulls the latest MySQL image from Docker Hub (if it is not already present locally) and starts a container from it. Note that MySQL listens on port 3306, and the official image requires the MYSQL_ROOT_PASSWORD environment variable to be set.
List all the containers (running and stopped) on your machine by using the following command:
sudo docker ps -a
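Note that docker ps -a lists containers; to list the images that have been downloaded to the machine, the corresponding command is:
sudo docker images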
Next, initialize the swarm on the manager node, advertising the manager's IP address:
sudo docker swarm init --advertise-addr 192.168.2.151
Subsequently, you should see the following output:
[Screenshot: output of docker swarm init, including the docker swarm join command for worker nodes]
This means that the manager node is successfully configured.
Now, add a worker node by copying the docker swarm join command printed by swarm init and running it on the worker node:
sudo docker swarm join --token SWMTKN-1-xxxxx 192.168.2.151:2377
Your worker node is also created if you see the following output:
[Screenshot: output confirming that the node joined the swarm as a worker]
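If the join command and its token are no longer at hand, they can be regenerated on the manager at any time with a standard Docker CLI command:
sudo docker swarm join-token worker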
Now, go back to the manager node and execute the following command to list the worker node:
sudo docker node ls
You should see the worker node listed in the output:
[Screenshot: docker node ls output listing the manager and worker nodes of the cluster]
The above output shows that the Swarm cluster has been created successfully. Now, launch a service in swarm mode.
Go to the manager node and execute the command below to deploy a service:
sudo docker service create --name HelloWorld alpine ping docker.com
The command above creates a service named HelloWorld that runs an Alpine container pinging docker.com; the swarm schedules it onto one of the nodes in the cluster.
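To confirm the service is doing its work, its task logs can be inspected from the manager (docker service logs is a standard Docker command, assuming the default json-file log driver):
sudo docker service logs HelloWorld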
To verify that the service is running, list the services in the swarm with the following command:
sudo docker service ls
Finally, you should be able to see the following output:
[Screenshot: docker service ls output showing the HelloWorld service running]
And that’s it! Well done, you have successfully installed and configured the Swarm cluster on Ubuntu 16.04. Also, whenever required, you can effortlessly scale your application with no performance issues.
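As a quick illustration of that scaling (the replica count below is just an example):
sudo docker service scale HelloWorld=3   # run three replicas of the HelloWorld service across the cluster
sudo docker service ps HelloWorld        # check which node each replica (task) is running on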
This brings us to the conclusion of this article on what Docker Swarm is. Here, we learned what Docker and Docker Swarm are, along with swarm mode key concepts and how Docker Swarm works. We also explored Kubernetes vs. Docker Swarm and why we use Docker Swarm. In the end, we also saw a demo on how to set up a swarm in the Docker ecosystem. Do you have any questions? Please feel free to put them in the comments section of this article, and our experts will get back to you at the earliest.
A broad understanding of container concepts like Docker is one of the most critical skills that a DevOps engineer should have. You can add this credential to your skillset by enrolling in Simplilearn's Docker Certified Associate (DCA) Certification Training Course. Get hands-on experience with Docker Compose and Docker Hub, create flexible, customizable environments and networks, and much more with this comprehensive training course using Simplilearn's unique Blended Learning approach.