The concept of containerization has entirely changed the way applications are built, developed, packaged, tested, monitored, and deployed to production. Before the introduction of container technologies, organizations and businesses used to spend enormous amounts of money, time, and manpower setting up virtual machines or physical servers and deploying applications, or components of applications, on them.
This process is secure and does the job; however, it is quite hectic. Developers have to go through the hassle of setting up virtual machines, creating environments, and installing the packages, binaries, and other dependencies needed to run their applications. Moreover, it is difficult to share applications with other developers using virtual machines, and virtual machines consume a lot of resources since each one runs a full guest operating system on top of the underlying hardware. Thus, a more efficient and quicker way was needed to change the entire development lifecycle. And that is exactly what happened.
With the introduction of container technologies, developers can now create packaged, isolated, portable, containerized environments to build, develop, test, and deploy their applications. These environments contain all the binaries, packages, system files, and other files needed to run the application. Moreover, since containers sit on top of the OS of the underlying infrastructure, you can easily run multiple containers on the same OS without one affecting the processes of the others. Thus, building a microservice architecture is as simple as running multiple containers, either on the same machine or on different hosts. Sharing applications hosted in containers, and rolling out updates to them, has also become much easier.
Docker is the leading open-source container platform and has dominated the market since its inception. It's easy to learn and makes the development and deployment of applications a piece of cake. However, when beginners start learning Docker containerization, it can seem confusing because of the terminology they come across: containers, images, volumes, Docker Hub, Docker Engine, registries, and so on. In fact, once a beginner gets hold of the two most important artifacts of Docker containerization - Docker Images and Docker Containers - the rest of the concepts fall into place much more easily.
This comprehensive guide will walk you through the entire concept of Docker Images. It will explore what they are, how they are created, their architecture, how to build Docker Images, pull them from registries, list them, and several other operations.
Please ensure that you have Docker installed on your system so that you can follow along with the examples in this tutorial. Here's a tutorial on how to install Docker on Windows. Although it's not strictly necessary, it will be easier if you have basic knowledge of Docker Containers before you move ahead; you can refer to this article on Docker Containers for that. So without any further ado, let's get started.
What Are Docker Images?
Before getting into the details of Docker Images, take a close look at the diagram above. On the left, you can see a typical Docker Image composed of multiple read-only layers. On the right, you can see multiple runtime instances created from the same Docker Image. These instances are called Docker Containers, and each one adds a thin writable layer on top of the image. The image in the diagram is a typical web server image: it uses a base image pulled directly from the official Docker registry, called Docker Hub, and on top of that are intermediate layers containing the Nginx web server and the installed web application with all its dependencies and libraries. Don't worry if this does not make complete sense yet; the rest of this guide walks through each of these pieces in detail.
To start with, a Docker Image can be considered the blueprint of the entire application environment that you create. Images are read-only, which means you can't make changes to them; they are immutable. However, when you create a container from an image, Docker adds a writable layer on top of the image's read-only layers, and that writable layer is where all of the container's changes are stored.
Now, typically there are two categories of Docker Images - Official Base Images that are pre-built and can be downloaded or pulled from registries, and Customized Images that use base images to create application-specific environments.
Docker Images are also referred to as snapshots because they are immutable. An image contains all the libraries, binaries, configuration files, and other artifacts required to run the application. Let's understand this with the help of an example.
Imagine you want to create a Docker Image and Container to host a web application. To do so, you need to follow a process. But before going into the process, let’s clear out a few important terms.
- Dockerfiles - These are files used to build a customized Docker Image. They contain step-by-step instructions: one for pulling a base image, others for running installation commands, and so on. Each instruction creates an intermediate image layer that builds on top of the layers before it.
- Docker Volumes - These solve the problem of persistent data storage. When you delete a Docker Container, any data stored in its writable layer is lost with it. To keep data around, you can mount directories from the host system (or named volumes managed by Docker) into containers when you run them; anything written to those locations persists independently of the container's lifecycle.
- Docker Registries - These are repositories, much like GitHub, that contain pre-built Docker Images such as MySQL, Ubuntu, CentOS, Nginx, etc. The official Docker registry is Docker Hub. You can pull images directly from these repositories using the docker pull command.
To create a Docker Image that hosts a web application, you can use the Nginx server to serve the pages. First, you have to create a Dockerfile that contains the instructions to build the image. Next, pull the Ubuntu base image from Docker Hub by specifying a "FROM ubuntu" instruction as the first line. This means that when you create a container after building the image, you will have access to an Ubuntu environment and can work inside it from the command line. Next, install the Nginx web server using "RUN <command>" instructions. A RUN instruction executes its command at build time, while the image is being created, and the result is baked into a new image layer; it is typically used to update the OS, install packages, and so on. Install Nginx with the same commands you would use to install packages on a Linux machine.
Then, use the "COPY <source> <destination>" instruction to copy the website files from the host machine into the image. The final Dockerfile would look like this -
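The original Dockerfile listing is not shown here; a minimal sketch matching the steps described above might look like the following (the index.html file name is an assumption about the build context):

```dockerfile
# Start from the Ubuntu base image pulled from Docker Hub
FROM ubuntu:latest

# Update the package index and install the Nginx web server (build time)
RUN apt-get update && apt-get install -y nginx

# Copy the website files from the build context into Nginx's web root
# (index.html is an assumed file name for this example)
COPY index.html /var/www/html/

# Document that the container listens on port 80
EXPOSE 80

# Start Nginx in the foreground when a container runs from this image
CMD ["nginx", "-g", "daemon off;"]
```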
The EXPOSE instruction documents that the container listens on port 80 so that it can be published to the host machine, and the final CMD instruction starts the Nginx server when the container runs.
You can even simplify this dockerfile by directly using the Nginx base image from Dockerhub.
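As a hedged sketch, such a simplified Dockerfile might look like this (again assuming the site is a single index.html file in the build context):

```dockerfile
# Start from the official Nginx image, which already has the server
# installed and configured to run in the foreground
FROM nginx:latest

# Copy the website files into Nginx's default web root
COPY index.html /usr/share/nginx/html/
```

Because the base image already installs and starts Nginx, the RUN, EXPOSE, and CMD steps from the Ubuntu-based version are no longer needed.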
This is how you use a Dockerfile to specify the instructions for building a customized Docker Image. The image starts from a base image, and each subsequent instruction builds an intermediate layer on top of it. Please note that Docker Images are just templates used to create running application environments called containers. You will see how to build this image and create containers from it later on. But before that, let's look at another way to get access to Docker Images.
Docker Image Pull Command
You can use the docker pull command to pull pre-built Docker Images from registries such as Docker Hub. Note that you do not need to be logged in to pull public images, although logging in with docker login is required for private repositories and raises the anonymous pull rate limits. Let's try to pull an Ubuntu image from Docker Hub using the following command.
$ docker pull ubuntu:latest
Here, latest is a tag, which tells the daemon to pull the latest version of the Ubuntu image. You can specify any valid tag or version; if you don't specify one, the latest tag is used by default. Also, before downloading anything, the daemon checks whether the image's layers already exist on your system, and only the missing layers are pulled from the registry.
You can see that the command has successfully pulled the Image from the Dockerhub registry. Let’s see how to list Docker Images.
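For reproducible environments, it is usually better to pin a specific version tag instead of relying on the moving latest tag. For example (these commands assume a running Docker daemon; the version numbers are just illustrative tags that exist on Docker Hub):

```shell
# Pull a specific Ubuntu release instead of "latest"
$ docker pull ubuntu:22.04

# Pull a specific Nginx version
$ docker pull nginx:1.25
```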
List Docker Images
Now that you have pulled a Docker Image from Docker Hub, you can verify it by listing all the Docker Images available on the local machine. To do so, use the docker images command.
$ docker images
This command lists all local images along with details such as the repository they belong to, their tags, image IDs, creation dates, and sizes.
Building Docker Images
If you have used a Dockerfile to create a custom Docker Image as shown in the example, you can use the docker build command to build the image from it. You already have the Dockerfile from the previous example, so let's build that image. Note that by default the file must be named Dockerfile, with no extension; if you use a different name, point to it with the -f option. The general syntax of the docker build command is -
$ docker build [OPTIONS] PATH | URL | -
It accepts several options, such as --no-cache, -t (tag), --rm, etc., and you can build images from a local directory containing a Dockerfile, a Git repository URL, and so on.
For this example, use the following command.
$ docker build -t webserver:latest .
In the above command, the -t option specifies a tag for the image, written as the image name and tag separated by a colon. The final argument, the dot, is the build context: the directory containing the Dockerfile and all the files required to build the image. Before running this command, make sure the directory you execute it from is laid out like this.
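The original screenshot of the layout is not reproduced here; assuming the single-page site from the earlier Dockerfile, the build context would look something like this:

```
webserver/
├── Dockerfile      # the build instructions
└── index.html      # the website files copied into the image
```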
Now, let’s execute the docker build command inside this directory.
It will execute all the instructions step by step.
Once the build finishes, the image is ready. You can use the docker images command to verify the build.
You can see that the image has been created. The next step is to create a container from this image.
Docker Run Command
Here, you will look at two examples, beginning with the Ubuntu image that you pulled earlier with docker pull. You will use the docker run command to start a container from this image. Let's first look at the general syntax of the docker run command.
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Along with the docker run command, you can use options such as --rm (remove the container after it exits), -d (run in detached mode), -i (keep STDIN open for interactive use), -t (allocate a pseudo-TTY), etc. You then specify the name of the image, along with a tag if needed, for the image you want to run a container from. Finally, you can specify a command to execute as soon as the container starts. Let's try to run an Ubuntu container.
$ docker run -it ubuntu:latest bash
Here, the -i and -t options run the container in interactive mode so that you can type commands into it, and bash at the end starts a bash shell as soon as the container runs.
You can see that you now have access to an Ubuntu environment and can interact with it through its bash shell. Now, use the docker run command to create a container from the webserver image that you built earlier from the Dockerfile.
$ docker run -it --rm -d -p 8080:80 --name=myweb webserver:latest
In the above command, the --rm option automatically removes the container once it exits, and the -d option runs it in the background. The -p option publishes port 80 of the container on port 8080 of the local machine. You have also assigned a name to the container using the --name option.
Now, if you navigate to localhost:8080 in your browser, you will find the HTML page that you hosted on the Nginx server.
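You can also check the server from the command line; assuming the myweb container from the command above is still running, fetching the published port should return your page:

```shell
# Fetch the page served by the container on the published port
$ curl http://localhost:8080
```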
Features of Docker Images
Let’s discuss a few highlighting features of Docker Images that will help you understand them better.
- Docker Images are simply templates made up of read-only layers; these intermediate layers are produced by the instructions in the corresponding Dockerfile.
- Containers created from these read-only images add a writable layer on top; you can modify a running container and then commit the changes with docker commit to build new, customized images.
- Also, you can download pre-built Images from Docker registries such as Dockerhub using the docker pull command.
- You can push your own Docker Images to private registries and share them with your colleagues. It is also possible to back up Docker Images by converting them into tarball files using the docker save command and to load them back as images using the docker load command.
- Docker Images can be very small; minimal base images such as Alpine are only a few megabytes in size.
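The backup workflow mentioned above can be sketched as follows (webserver.tar is just an example file name, and the commands assume a running Docker daemon):

```shell
# Save the image - all of its layers and metadata - to a tarball
$ docker save -o webserver.tar webserver:latest

# Later, or on another machine, load it back into the local image store
$ docker load -i webserver.tar
```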
Advantages of Docker Images
The following are the advantages or benefits of Docker Images that make them immensely popular among the developer communities.
- Docker Images are highly portable. Because they bundle the application together with its dependencies, libraries, environment files, etc., an image shared with others runs the same way on any platform that runs Docker. Note that Docker Containers themselves are not portable in this sense: changes you make inside a container live only in that container's writable layer. If you want to share those changes, you first need to commit them with the docker commit command to create a new Docker Image, and then share that image.
- They are extremely lightweight, because they consist of multiple layers and each layer stores only the difference from the layer preceding it.
- Docker Images are consistent because they are immutable. This is useful when you want to test the application while being sure that the environment itself will not change underneath you.
- It makes sharing applications very easy as you only need to push the images to repositories and share them with your teammates. You can also convert them to compressed tarball files.
- Docker Images are secure by design: every image is identified by a content-addressable hash (digest), which lets Docker verify that an image has not been tampered with, and images can additionally be digitally signed (for example, with Docker Content Trust) so that consumers can verify the publisher.
Enroll for the Docker Certified Associate Training Course to learn the core Docker technologies like the Docker Containers, Docker Compose, and more.
You have looked into how Docker Images are one of the core components of the entire Docker Containerization concept. In fact, everything starts with Docker Images. They define the blueprint of the application development and deployment environments. They are just like templates and can be reused multiple times as base images to create custom Docker Images.
We certainly hope that this comprehensive guide has provided you with every detail you need to get your hands dirty with Docker Images. You can also go through our complete tutorial on Docker for Beginners.
Also, check out our certified training courses on Docker and DevOps which you can leverage to skill yourself up and get industry-level certifications on the go. Below are the most popular ones.
- Docker Certified Associate (DCA) Certification Training Course
- DevOps Engineer
- Post Graduate Program in DevOps
If you have any questions regarding this article, leave them in the comments section. Our industry experts will get back to you on the same, soon. Happy Learning!