In just the past few years, Docker's popularity has greatly increased. The reason? It has changed the way software development happens. Docker's containers allow for immense economies of scale and have made development scalable, while at the same time keeping the process user-friendly.
In this getting started with Docker tutorial, you will learn what Docker is, its advantages, how it works, its core and advanced components, and some basic commands. But before we jump right in, you must first know the difference between Docker and virtual machines. So, let's begin.
There are some major differences between the two: a virtual machine runs a complete guest operating system on top of a hypervisor, while Docker containers share the host operating system's kernel; container images are typically megabytes in size rather than gigabytes; and containers start in seconds, whereas virtual machines can take minutes to boot.
Now that you know the differences between virtual machines and Docker, let's begin this getting started with Docker tutorial by understanding what Docker actually is.
When going through this getting started with Docker tutorial, we first need to understand what Docker is. Docker is an OS-level virtualization software platform that allows IT organizations to easily create, deploy, and run applications in Docker containers, which have all their dependencies within them. The container itself is really just a very lightweight package that has all the instructions and dependencies, such as frameworks, libraries, and binaries, within it.
The container itself can be moved from environment to environment very easily. In a DevOps life cycle, the area where Docker really shines is deployment, because when you deploy your solution, you want to be able to guarantee that the code that has been tested will actually work in the production environment. In addition to that, when you’re building and testing the code, having a container running the solution at those stages is also beneficial because you can validate your work in the same environment used for production.
You can use Docker in multiple stages of your DevOps cycle, but it is especially valuable in the deployment stage. Next up in this getting started with Docker tutorial are the advantages of Docker.
Next in the getting started with Docker tutorial, we focus on the advantages of Docker. As noted previously, you can do rapid deployment using Docker. The environment itself is highly portable and was designed with efficiencies that allow you to run multiple Docker containers in a single environment, unlike traditional virtual machine environments.
The configuration itself can be scripted through a language called YAML, which allows you to describe the Docker environment you want to create. This, in turn, allows you to scale your environment quickly. But probably the most critical advantage these days is security.
You have to ensure that the environment you’re running is highly secure yet highly scalable, and Docker takes security very seriously. You’ll see it as one of the key components of the agile architecture of the system you’re implementing.
Now that you know the advantages of Docker, the next thing you need to know in this getting started with Docker tutorial is how it works and its components.
Docker works via a Docker engine that is composed of two key elements: a server and a client; the communication between the two is via a REST API. The client sends instructions to the server, which does the work of building, running, and managing containers. On older Windows and Mac systems, you can take advantage of Docker Toolbox, which allows you to control the Docker engine using tools such as Compose and Kitematic.
Now that we have learned about Docker, its advantages, and how it works, our next focus in this getting started with Docker tutorial is to learn the various components of Docker.
There are four components that we will discuss in this getting started with Docker tutorial: the Docker client and server, Docker images, the Docker registry, and Docker containers.
This is a command-line-instructed solution in which you use the terminal on your Mac or Linux system to issue commands from the Docker client to the Docker daemon. The communication between the Docker client and the Docker host is via a REST API. You can issue commands such as docker pull, which sends an instruction to the daemon; the daemon then performs the operation by interacting with other components (image, container, registry). The Docker daemon itself is actually a server that interacts with the operating system and performs services. As you'd imagine, the Docker daemon constantly listens across the REST API to see if it needs to perform any specific requests. If you want to trigger and start the whole process, you use the dockerd command, which starts the Docker daemon. Then you have a Docker host, which runs the Docker daemon and registry.
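To make the client-daemon interaction concrete, here is a minimal sketch, assuming Docker is already installed and the daemon is running:

$ docker version
$ docker pull ubuntu

The first command asks the daemon to report version details for both the client and the server, confirming that the two can communicate over the REST API. The second sends a pull instruction to the daemon, which fetches the ubuntu image from the registry on your behalf.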
Now let's talk about the actual structure of a Docker image in this getting started with Docker tutorial. A Docker image is a template that contains the instructions for building a Docker container. That template is written in a plain-text file called a Dockerfile, which uses a simple instruction syntax of its own. (YAML, which originally stood for Yet Another Markup Language, is used elsewhere in Docker, for Docker Compose files, rather than for images.)
Next in the getting started with Docker tutorial, we will learn all about the Docker image. The Docker image is built from the Dockerfile and can then be hosted in a Docker registry. The image has several key layers, and each layer depends on the layer below it. Image layers are created by executing each command in the Dockerfile and are read-only. You start with your base layer, which will typically have your base image and your base operating system, and then you will have a layer of dependencies above that. These layers are produced, one per instruction, from the read-only instructions in your Dockerfile.
Here we have four layers of instructions: FROM, COPY, RUN, and CMD. What does that actually look like? The FROM instruction creates a layer based on Ubuntu, and then the COPY instruction adds files from the build context to that base layer.
In this instance, the CMD instruction tells the container to run Python. One of the things that happens as we set up multiple containers is that each new container adds its own thin writable layer on top of the read-only image layers within the Docker environment. Each container is completely separate from the other containers, so each one gets its own separate read-write layer. What's interesting is that, because each layer depends on the layer below it, if you delete a layer, the layers above it become unusable as well.
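Putting those four instructions together, a minimal Dockerfile might look like the sketch below. The Ubuntu tag and the app.py file name are assumptions for illustration, not part of the original example:

# Base layer: start from the Ubuntu base image
FROM ubuntu:22.04
# Add files from the build context into the image
COPY . /app
# Dependency layer: install Python
RUN apt-get update && apt-get install -y python3
# Default command the container runs
CMD ["python3", "/app/app.py"]

You would then build an image from this file with a command such as $ docker build -t my-python-app . where the tag name is again just illustrative.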
What happens when you pull in a layer but something changes in the core image? Interestingly, the main image itself cannot be modified. Once you’ve copied the image, you can modify it locally. You can never modify the actual base image.
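One way to see this in practice: run a container from a base image, change something inside it, and save the result as a new image with docker commit; the base image itself is untouched. The container and image names here are hypothetical:

$ docker run -it --name my-container ubuntu bash
$ docker commit my-container my-modified-ubuntu

The original ubuntu image remains exactly as it was; my-modified-ubuntu is a new local image layered on top of it.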
Next in the getting started with Docker tutorial, we will learn all about the Docker registry. The Docker registry is where you host various types of images and where you distribute the images from. The repository itself is just a collection of Docker images, which are built from the instructions in a Dockerfile and are very easily stored and shared. You can give the Docker images name tags so that it's easy for people to find and share them within the Docker registry. One way to start managing a registry is to use the publicly accessible Docker Hub registry, which is available to anybody. You can also create your own registry for internal use.
The registry that you create internally can have both public and private images that you create. The commands you use to connect to the registry are push and pull. Use the push command to push a new container image you've created from your local manager node to the Docker registry, and use a pull command to retrieve an image from the Docker registry. Again, a pull command retrieves a Docker image from the Docker registry, and a push command lets you take a new image that you've created and push it to the registry, whether it's Docker Hub or your own private registry.
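As a sketch of that round trip, assuming a Docker Hub account named myuser (a hypothetical name):

$ docker tag my-python-app myuser/my-python-app:1.0
$ docker push myuser/my-python-app:1.0
$ docker pull myuser/my-python-app:1.0

The tag command names the image under your account, push uploads it to the registry, and pull retrieves it on any machine with access to that registry.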
Next in the getting started with Docker tutorial, we will learn all about the Docker container. The Docker container is an executable package of an application and its dependencies bundled together; it includes all the instructions for the solution you're looking to run. It's really lightweight because, unlike a virtual machine, it doesn't carry a full guest operating system. The container is also inherently portable. Another benefit is that it runs completely in isolation: a running container is not impacted by the host OS's particular setup, unlike with a virtual machine or a non-containerized environment. The memory for a Docker environment can be shared across multiple containers, which is really useful, especially compared to a virtual machine that has a defined amount of memory reserved for each environment.
The container is built using Docker images, and the command to run those images is docker run. Let's go through the basic steps of running a Docker image in this getting started with Docker tutorial.
Consider a basic example of the docker run command, starting a single container running Redis:
$ docker run redis
If you don't have the Redis image locally, it will be pulled from the registry. After this, the new Redis container will be available within your environment so you can start using it.
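In practice you will often add a few flags; the container name and port mapping below are illustrative assumptions:

$ docker run -d --name my-redis -p 6379:6379 redis

Here -d runs the container detached in the background, --name gives it a memorable name, and -p 6379:6379 maps the container's Redis port to the same port on the host.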
Now let's look at why containers are so lightweight: they do not have some of the additional layers that virtual machines do. The biggest layer Docker doesn't have is the hypervisor, and each container doesn't need its own guest operating system, because containers share the host OS kernel instead.
Now that you know the basic Docker components, let's look into the advanced Docker components in this getting started with Docker tutorial.
After going through the various components of Docker, the next focus of this Docker tutorial is the two advanced components of Docker: Docker Compose and Docker Swarm.
Docker Compose is designed for running multiple containers as a single service. It does so by running each container in isolation while allowing the containers to interact with one another. As noted earlier, you write Compose environments using YAML.
So in what situations might you use Docker Compose? An example would be if you are running an Apache server with a single database and you need to create additional containers to run additional services without having to start each one separately. You would write a single Docker Compose file to do that, as sketched below.
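A minimal sketch of such a docker-compose.yml, where the service names, images, port, and password are illustrative assumptions:

services:
  web:
    # Official Apache HTTP Server image, published on host port 8080
    image: httpd:latest
    ports:
      - "8080:80"
  db:
    # A single database container alongside the web service
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example

With this file in place, $ docker compose up -d starts both containers together as a single service.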
Docker Swarm is a service for containers that allows IT administrators and developers to create and manage a cluster of swarm nodes within the Docker platform. Each node of a Docker swarm is a Docker daemon, and all Docker daemons interact using the Docker API. A swarm consists of two types of nodes: manager nodes and worker nodes. A manager node handles cluster management tasks, while worker nodes receive and execute tasks from the manager node.
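As a minimal sketch, standing up a swarm and running a replicated service might look like this; the service name and the nginx image are assumptions for illustration:

$ docker swarm init
$ docker service create --name web --replicas 3 nginx
$ docker service ls

The swarm init command turns the current Docker daemon into a manager node (and prints a join token for worker nodes), service create asks the manager to schedule three replicas of the service across the swarm, and service ls shows their status.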
After having looked into all the components of Docker, let us advance our learning in this getting started with Docker tutorial to Docker commands and a use case.
To close out this getting started with Docker tutorial, let's look at some of the basic Docker commands.
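The following is a quick reference; all of these are standard Docker CLI commands:

$ docker images
$ docker ps
$ docker ps -a
$ docker stop <container>
$ docker rm <container>
$ docker rmi <image>

In order: list the images available locally, list running containers, list all containers including stopped ones, stop a running container, remove a stopped container, and remove a local image.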
While this getting started with Docker tutorial is just an overview, there are a great many uses for Docker, and it is highly valuable in DevOps today. To learn more and get a comprehensive Docker tutorial, check out our free resources and our Docker Certified Associate (DCA) Course.