How to Install Docker on Ubuntu: A Step-By-Step Guide

Docker is a modern platform for high-velocity innovation: a tool that automates the deployment of applications in lightweight containers so that they run reliably across different environments.

A few quick notes about Docker:

  • Runs multiple containers on the same hardware
  • Keeps applications isolated from one another
  • Enables high productivity
  • Is quick and easy to configure

Before learning about this technology, the first step is to install it. In this article, you’ll learn how to install Docker on Ubuntu.

Prerequisites

Operating System Requirements for Docker Installation on Ubuntu

To set up Docker Engine, you'll need to use the 64-bit version of one of these Ubuntu versions:

  • Ubuntu Lunar 23.04
  • Ubuntu Kinetic 22.10
  • Ubuntu Jammy 22.04 (Long-Term Support)
  • Ubuntu Focal 20.04 (Long-Term Support)

Docker Engine for Ubuntu works on various types of computers, including those with x86_64 (or amd64), armhf, arm64, s390x, and ppc64le (ppc64el) architectures.
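
If you are not sure which Ubuntu release or CPU architecture you are running, you can check with standard commands such as the following (a quick sanity check, not part of the official installation steps):

lsb_release -a
dpkg --print-architecture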

Docker Installation on Ubuntu 

While the default Ubuntu repository includes a Docker installation package, it might not always contain the most recent version. Installing the latest Docker release from Docker's own repository is therefore the recommended approach.

Setting up the Docker repository

1. First, update the software package list and install the prerequisites required to add a repository over HTTPS:

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common

2. Use the following curl command to import the repository's GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

3. Add the Docker APT repository to your machine:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
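
As an optional check, you can refresh the package list and confirm that the docker-ce package is now available from the Docker repository:

sudo apt update
apt-cache policy docker-ce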

The following are some general prerequisites for installing and maintaining Docker on Ubuntu:

  • Docker Engine requires the 64-bit version of Ubuntu
  • Support for KVM virtualization
  • A desktop environment such as GNOME or KDE
  • Administrator (sudo) rights on the user account and access to a terminal

Steps for Installing Docker on Ubuntu:

1. Open the terminal on Ubuntu.

2. Remove any older Docker packages that may already be installed on the system, using the following command:

$ sudo apt-get remove docker docker-engine docker.io

After entering the above command, you will need to type your sudo password and press Enter.

3. Update the package index so that the system is up to date, using the following command:

$ sudo apt-get update

4. Install Docker using the following command:

$ sudo apt install docker.io

You’ll then get a prompt asking you to choose between y and n; choose y.
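
If you prefer to skip the prompt entirely, apt accepts a -y flag that answers yes automatically (an optional shortcut, not required by the steps above):

$ sudo apt install -y docker.io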

5. Alternatively, you can install Docker as a snap package, which bundles Docker together with its dependencies, using the following command:

$ sudo snap install docker

6. Before testing Docker, check the version installed using the following command:

$ docker --version
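
Optionally, you can also confirm that the Docker service is running before pulling any images; on Ubuntu the daemon is managed by systemd, so the following commands show its status and enable it at boot:

$ sudo systemctl status docker
$ sudo systemctl enable --now docker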

7. Pull and run an image from Docker Hub using the following command:

$ sudo docker run hello-world

Here, hello-world is a Docker image hosted on Docker Hub.

8. Check if the docker image has been pulled and is present in your system using the following command:

$ sudo docker images

9. To display all the containers pulled, use the following command:

$ sudo docker ps -a

10. To check for containers in a running state, use the following command:

$ sudo docker ps

You’ve just successfully installed Docker on Ubuntu!

Executing the Docker Command Without Sudo 

By default, Docker commands must be executed either with elevated privileges (using sudo) or by a user who belongs to the docker group.

Unless the appropriate user is added to the docker group, Docker commands cannot be executed without sudo. Follow these instructions to execute Docker commands without sudo.

First, create the docker group:

If the docker group does not already exist, create it by running the following command in a terminal window:

$ sudo groupadd docker

Next, add your user to the docker group:

Execute the command below to add the current user to the docker group:

$ sudo usermod -aG docker $USER

Finally, log out and back in (or reboot):

Log out and log back in, or reboot the computer, so that the new group membership takes effect.
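
If you would rather not reboot, you can usually apply the new group membership in the current shell and test it right away; the following sketch assumes your user was added to the docker group in the previous step:

$ newgrp docker
$ docker run hello-world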

Using the Docker Commands

Mastering a few Docker commands and habits after installing Docker on Ubuntu will help you get more out of the platform.

  • To follow a container's log output, use the -f flag with docker logs.
  • Docker outputs much of its information as JSON by default; use jq to extract individual keys from it.
  • In your Dockerfile, there are quite a few instructions where commands may be specified (for example, RUN, CMD, and ENTRYPOINT).
  • Writing data to volumes can be more efficient than copying it into the image while the image is being built.
  • Docker commands work well with shell aliases, which make lengthy commands easier to set up and manage. These aliases are typically stored in ~/.bashrc or ~/.bash_aliases, as shown in the sketch after this list.
  • Docker also provides commands, such as docker system prune, to remove unused images, containers, and other objects from the installation.
  • Docker caches the layers of Dockerfile instructions that have not changed. You can therefore save build time by arranging the Dockerfile so that instructions that rarely change appear towards the top and those that change frequently appear towards the end.
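
As an illustration of the alias tip above, you might add lines like the following to ~/.bash_aliases (the alias names here are purely illustrative) and then reload the shell with source ~/.bash_aliases:

alias dps='docker ps -a'
alias dimg='docker images'
alias dclean='docker system prune -f'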

Working with Docker Images

Docker images are built from instructions contained in a special file called a Dockerfile. It has a unique syntax and outlines the actions Docker is going to perform to construct the image for your container.

Because images are built up as layers of changes, each new instruction you run when building a Docker image adds another layer on top of the previous ones.

The top layer is a thin writable layer. The user may modify this layer and save the result as a new image with the docker commit command.
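
One simple way to see those layers on any image you have already pulled is the docker history command; for example, assuming the ubuntu image is present locally, the following lists every layer along with the instruction that created it:

sudo docker history ubuntu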

STEP 1: You may use the following command to list the Docker images in your local repository:

sudo docker images

STEP 2: You may use the -a flag to see a list of all Docker images, including intermediate images.

sudo docker images -a

STEP 3: Typically, just the first twelve characters of an image's ID are shown when listing Docker images. Use the --no-trunc flag to display the full image IDs.

sudo docker images --no-trunc

STEP 4: The Docker Search Command: the docker search command looks up images on Docker Hub. The typical syntax is:

sudo docker search <image-name>
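
For instance, searching Docker Hub for official Ubuntu images might look like this (ubuntu is just an example image name):

sudo docker search ubuntu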

Running a Docker Container

The docker run command controls container execution. The first step is to launch a container, here in interactive mode, which you can later detach from so it keeps running in the background:

sudo docker run -it centos /bin/bash 

After that, press Ctrl+p followed by Ctrl+q to detach and return to your host terminal without stopping the container.
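
To find the detached container again and reattach to it later, you can use the following commands (the container ID will be whatever your system assigned):

sudo docker ps
sudo docker attach [CONTAINER_ID]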

Managing a Docker Container

1. Listing Docker Containers

Execute this command to display the containers:

# docker ps [ OPTIONS ]

2. Starting a Docker Container

Create and start a container from an image using the command shown below:

# docker run [ OPTIONS ] IMAGE[:TAG]

3. Stopping a Docker Container

One, many, or all containers can be stopped at once. Use the following syntax:

docker stop [-t|--time[=10]] CONTAINER [CONTAINER...]
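
For example, to stop a single container by name or ID, or to stop every running container at once (the container name below is only a placeholder):

docker stop my_container
docker stop $(docker ps -q)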

How to Commit Changes in a Docker Container?

One of the fundamental tasks when working with Docker images and containers is committing changes to an existing Docker image. When you commit changes, you effectively create a new image consisting of the original base image with your modifications layered on top.

1. Retrieve a Docker image.

sudo docker pull ubuntu

2. You will find the Ubuntu image if you check the list of available images again:

sudo docker images

3. Run the Container

sudo docker run -it cf0f3ca922e0 /bin/bash

The -it options tell Docker to run the container in interactive mode and attach a terminal to it, dropping you into a new shell inside the container so you can start working within it.

4. Commit Changes

Finally, commit the changes by using the syntax shown below to produce a new image.

sudo docker commit [CONTAINER_ID] [new_image_name]

5. Exit the container when you've finished modifying it:

exit

6. To preserve the changes you applied to the existing image, you need the CONTAINER ID. Take note of the ID value shown in the output of sudo docker ps -a.
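
Putting these steps together, a typical commit sequence might look like the following sketch; the container ID and the new image name are illustrative and will differ on your system:

sudo docker ps -a
sudo docker commit 1a2b3c4d5e6f ubuntu-modified
sudo docker images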

Pushing an Image into Private Docker Repositories

The image must first be prepared with the appropriate name and tag, if applicable. This may be accomplished by building a new image or by modifying a previously created one; sometimes we choose to push an existing image rather than building one from scratch.

To tag the image with the correct name before pushing it to our repository, use the command below:

docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

The command below may be used to check the outcome:

docker images

Now that the image has been built and tagged, we can push it to our private repository. First, log in to Docker Hub using this command:

docker login

Finally, push the image using the following command:

docker push [OPTIONS] NAME[:TAG]
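
As a worked example, assuming a Docker Hub account named your-username (a placeholder) and the ubuntu-modified image from the previous section, the full tag-and-push flow might look like this:

docker tag ubuntu-modified your-username/ubuntu-modified:v1
docker login
docker push your-username/ubuntu-modified:v1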

Engaging with Docker Volumes

Docker volumes are a popular and practical way to ensure data persistence when working with containers. They are more efficient than adding extra writable layers, which increase the size of the Docker image.

There are several ways to attach a Docker volume when starting a container. The docker run command accepts both the --mount and -v flags, allowing you to choose between them; an example is shown at the end of this section.

Use this command to create a Docker volume:

docker volume create [volume_name]

For instance, you would use the following command to create a volume named data:

docker volume create data

The inspect subcommand may be used to find additional details about a Docker volume:

docker volume inspect [volume_name]

It provides information about the volume, such as where it resides on the host file system (the Mountpoint). Everything stored in the volume is kept in the directory under that mountpoint path.

A Docker volume can be deleted using the following command-line syntax:

docker volume rm [volume_name]
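
As an example of attaching a volume when starting a container, both of the commands below mount the data volume created above into a container at /data; the image and mount path are only illustrative:

docker run -it -v data:/data ubuntu /bin/bash
docker run -it --mount source=data,target=/data ubuntu /bin/bash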

Docker Network Commands

A bridge network exists on a single host running an instance of Docker Engine, while an overlay network can span multiple hosts, each running its own engine. When you execute docker network create and supply only the network's name, Docker creates a bridge network for you.

Unlike bridge networks, overlay networks can only be created if certain prerequisites already exist. These conditions are: 

  1. Access to a key-value store (the Engine supports several key-value stores).
  2. A cluster of hosts connected to the key-value store.
  3. A properly configured Engine daemon on each host in the swarm.

The overlay network is enabled by the following daemon options:

  • --cluster-store
  • --cluster-store-opt
  • --cluster-advertise

By default, the Engine assigns a subnetwork to the network when you create one. You can override this default and manually define a subnetwork using the --subnet option. On a bridge network you are limited to a single subnet, whereas an overlay network supports multiple subnets.
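
For instance, creating a bridge network with a manually chosen subnet could look like this; the network name and address range are placeholders:

docker network create --driver bridge --subnet 172.28.0.0/16 my_bridge_net
docker network inspect my_bridge_net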

Become a DevOps Practitioner

Eager to learn the core Docker technologies? Simplilearn’s Docker Certified Associate (DCA) Certification Training Course helps you gain proficiency in Docker Hub, Docker Compose, Docker Swarm, Dockerfile, Docker Containers, and more. If you’re looking forward to beginning a career in DevOps, the Post Graduate Program in DevOps would be a great fit. The DevOps course bridges the gap between software developers and operations. You can gain expertise in the principles of continuous development and deployment, automation of configuration management, inter-team collaboration, and IT service agility, using modern DevOps tools such as Git, Docker, Jenkins, Puppet, and Nagios. DevOps jobs are highly paid and in great demand, so start your journey today.

About the Author

Sayeda Haifa Perveez

Haifa Perveez is passionate about learning new technologies and working on them. She is an engineer who loves to travel, read and write. She's always curious about things and very determined to track the latest technologies and the trends that they are creating for the future.
