Containers are changing how development teams at organizations in industries from education to financial services, construction to manufacturing test and deploy applications. By design, containers isolate applications from one another, enabling these teams to contain high-risk issues and making those issues far less likely to impact other applications running in the enterprise.

Because containers can all be deployed from a single server, development teams save time, money, and effort when deploying and testing applications.


How Do I Use Docker for DevOps?

The primary reason to use containers instead of VMs for development is that they are lightweight: they support serverless-style workflows and automate and speed up application deployment.

With this, companies can shrink their VM footprint, driving down cost and improving the speed at which they test and deploy code. The value of Docker for DevOps continues because a fully isolated application can be deployed to multiple servers while no other application can reach inside it: the container is exposed only through the ports you publish and the Docker client. If you run multiple databases, logging applications, web servers, and so on, you never need to worry about them interfering with one another, because each development environment is entirely isolated.

In short, use containers both to package up the application itself and to package up its supporting services for development.

Working With Containers in Your Development Environment

Docker for DevOps works by creating and using private containers. Developers use it to build private images for internal projects: they can quickly create configuration files, packages, and images, then use them to test an application privately, without exposing it to any other environment.

The first step to Dockerizing a project is writing a Dockerfile that describes what to build. In the Dockerfile, a developer specifies the base image, the required tools and libraries, and the commands needed to build a particular application. In addition, the project's working directory (the build context) plays a crucial role during the build: its files are sent to the Docker daemon and can be copied into the image, and the resulting image layers provide the basic file system that the container sees when it runs.
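As a minimal sketch of such a Dockerfile (the Node.js base image, the port, and the file names here are assumptions chosen for illustration, not taken from a real project), a small web application might be described like this:

```dockerfile
# Base image: official Node.js runtime (an assumed choice for this sketch)
FROM node:18-alpine

# Files from the build context are copied into the image's file system
WORKDIR /opt/graphql
COPY package.json ./
RUN npm install
COPY . .

# Port the application listens on, and the command run at start-up
EXPOSE 8080
CMD ["npm", "start"]
```

Each instruction adds a layer to the image; together those layers become the file system the container runs on.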


The next step to Dockerizing a project is to create a build directory (the build context) from which the image will be built. docker build reads the Dockerfile in that directory and produces a private image. After the build is created, we can run containers from it on the container host using the Docker command line.

Let's say that we are developing a web application and need an image of it that can run on each of the architectures we support. The first step is to pull a base image from a registry. We do this by running the command below in your terminal on a machine with Docker installed.

$ docker pull archlinux/archlinux:latest

At this point, the base image has been pulled to the machine. A Dockerfile that builds on it is required next; once docker build has produced our application image, we can start a container from that image on the host machine using the Docker command line:

$ docker run --rm --name graphql-8 -it -p 8000:8080 user-f26062b/graphql-8:latest

The --rm flag removes the container when it exits, -it gives it an interactive terminal, and the -p flag publishes the port: requests to port 8000 on the host are forwarded to port 8080 inside the container. After the container is running, we can list it using the Docker command:

$ docker container ls --filter name=graphql-8

We have obtained a list of the containers whose name matches 'graphql-8', including the one we just started from the image built above. The listing shows each container's ID, the image it was created from, the command it is running, how long it has been up, and its published ports.
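When a script needs a single field from that listing, it can cut the field out of the tabular output with awk. A minimal sketch, using illustrative sample text rather than output captured from a live daemon:

```shell
# Sample `docker ps`-style output (illustrative only, not from a live daemon)
sample='CONTAINER ID  IMAGE                           COMMAND      NAMES
3f2a1b9c0d4e  user-f26062b/graphql-8:latest   "npm start"  graphql-8'

# The container name is the last column; print it for every row after the header
names=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $NF }')
echo "$names"
```

In practice, docker ps --format '{{.Names}}' produces the same result directly, without any text parsing.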

We can modify this approach to build an image for a specific application environment. In this case, we want a CentOS 7 based container image (tagged rkt:centos7 because we will later run it under rkt). We can build the image using the following command:

$ docker build -t rkt:centos7 .

After the image is built, we can confirm that it is available locally using the command:

$ docker image ls rkt:centos7

This completes the creation of the CentOS 7 container image. We can now check the status of our application container by using the command below in your terminal:

$ docker inspect --format '{{.State.Status}}' graphql-8

If the container is up, the command prints running. Our container is running and listening on port 8000 on the host, forwarded to port 8080 inside the container.

Testing this container is easy: simply run the command below in your terminal:

$ docker run --rm --name graphql-8 -it -p 8080:8080 -v /opt/graphql:/opt/graphql user-f26062b/graphql-8:latest

Once the container has started, we can list it using the command below:

$ docker container ls --filter name=graphql-8

Again, the listing confirms that the container is up, along with its uptime and the command it is running.

One last step is to stop and remove the container when we are finished with it. To do this, run the following command:

$ docker rm -f graphql-8

This completes the steps we need before launching our first application under rkt.


Using Containers to Publish Your Applications to the Cloud

We have seen that launching a container is simple, and it is possible to build a multi-user machine that can be scaled out with ease. In addition, you can upload applications to the cloud, and deployment through your CI/CD system can be straightforward.

The rkt container runtime offers other good features as well. Let us discuss some of them.

Port-Forwarding

This feature allows a process on your local machine to reach a container on the remote host's network. In other words, to reach the containers, we need access to port 8080 on the host, and that port must be reachable before we start the container. To forward it from the host machine over SSH, simply run the command below:

$ ssh -L 8080:localhost:8080 user@host
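The argument to -L has the form local_port:destination_host:destination_port. As a small sketch of how the command is assembled (user@host is a placeholder; substitute your own login and hostname):

```shell
# Assemble an ssh local port-forward command.
# user@host is a placeholder -- substitute your own login and hostname.
LOCAL_PORT=8080
REMOTE_PORT=8080
SSH_CMD="ssh -L ${LOCAL_PORT}:localhost:${REMOTE_PORT} user@host"
echo "$SSH_CMD"
```

While the tunnel is open, traffic sent to localhost:8080 on your machine is forwarded to port 8080 on the remote host.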

Port 8080 is now forwarded. We can then start the container, pulling the image from the registry if necessary, with the following command:

$ docker run --rm --name graphql-8 -i -p 8080:8080 -v /opt/graphql:/opt/graphql user-f26062b/graphql-8:latest

The container should start in seconds. To see if the container is running, run the command below:

$ docker ps

The above command gives us a list of all the containers running on the host.

Cloud Management With CRI

The rkt runtime was designed with cloud container management in mind: through the Kubernetes Container Runtime Interface (CRI), implemented for rkt by the rktlet project, it can serve as the runtime behind a cluster, which makes deploying a container to the cloud straightforward.

You can create, manage, and scale your container clusters this way. Here are some typical use cases.

Deploying a containerized application to the cloud

You can use this feature to deploy a containerized application to the cloud.

Accessing the Container via the rkt CLI

The easiest way to get a shell inside a running pod is the rkt enter command (the pod UUID comes from the listing below):

$ rkt enter <pod-uuid>

To see all the pods rkt knows about, together with their state and network addresses, run the command below:

$ rkt list

If you want to stop a pod, pass its UUID to rkt stop:

$ rkt stop <pod-uuid>

To start a new pod from an image, use rkt run (the docker:// prefix lets rkt fetch Docker images):

$ rkt run docker://nginx

Testing and Deploying the Application in the Cloud

Testing the application before deploying it is strongly recommended. With rkt, you can smoke-test the image locally simply by running it:

$ rkt run docker://user-f26062b/graphql-8:latest

rkt itself has no deploy command; once the image passes its tests, push it to your registry and let your CI/CD pipeline or cluster orchestrator roll it out, and your application will be deployed to the production cloud.

Extending the Feature Set of rkt

Since we have seen how to add applications to the rkt network, extending its feature set is straightforward. Various open-source projects and modules for managing containers are available, and we can build on them to develop rkt's capabilities.

Most of the time, rkt runs containers from ordinary container images rather than a special configuration file, so the usual way to extend what rkt runs is to create a Dockerfile. The Dockerfile should have contents along the following lines:

FROM ubuntu:12.04
LABEL maintainer="rkt"
RUN apt-get update
RUN mkdir -p /opt/graphql
COPY . /opt/graphql
WORKDIR /opt/graphql


Now that the Dockerfile is ready, you can build the image with the Docker engine:

$ docker build -t user-name-of-container .

After the image is pushed to a registry, rkt can fetch and start it in seconds.


Doing a Live Migration

rkt has no single live-migration command; moving a workload to another data center means starting a replacement container there and retiring the old one, then switching traffic from the old data center to the new one.

Before migrating, make sure the application's files are on storage that both data centers can reach (in our example, the directory /opt/graphql/run/dockerd). If that directory does not exist on the target host, the container will fail to start with an error such as:

Error running docker: /opt/graphql/run/dockerd: no such file or directory

So, to migrate, we create a new container in the new data center. The following command starts an nginx container there:

$ rkt run docker://nginx

The replacement starts in seconds. Once it is healthy, stop the old pod in the original data center and cut traffic over:

$ rkt stop <old-pod-uuid>

This is how to start, stop, and move containers with rkt.


Questions and Answers

1. What container image should I use for my GraphQL application?

A GraphQL application is most often packaged as a Docker image built on the official Node.js base image. Whatever image you use, it should meet the following requirements:

  • CPU limits are set so the application cannot pin the host at 100% usage.
  • Data processed by the application is cached on a persistent volume.
  • It uses a recent version of Node.js and an up-to-date Docker engine.

2. Can I run a GraphQL server in a network without any containers?

Yes: a single GraphQL server can run directly on a host without containers. A cluster of nodes only becomes necessary when you want to run GraphQL processing in a distributed way; to run such a cluster in a single data center, you run several networked containers side by side. We will cover the details of this process in a future article.

3. Can I stop, restart and start the container?

Yes, you can stop, start, and restart the container. However, Docker itself cannot schedule the container for maintenance; that requires an external scheduler.

4. Can I stop, restart and start the container when I need it?

Yes, you can stop, restart, and start the container whenever you need to.

5. How should I configure systemd to run the containers?

To configure systemd to run the containers, create a system user account with permission to run containers (for Docker, a member of the docker group), then write a systemd unit file that starts the container as that user. The following example shows a minimal systemd unit file for a Docker container:

$ cat /etc/systemd/system/docker-graphql.service
[Unit]
Description=GraphQL application container
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --rm --name graphql-8 user-f26062b/graphql-8:latest
ExecStop=/usr/bin/docker stop graphql-8
Restart=always

[Install]
WantedBy=multi-user.target

You can get the default units from the systemd documentation.

6. How should I run a Cluster?

To run a cluster of containers, write one systemd unit file per container and make each unit PartOf a common target so the group starts and stops together. The group is then controlled with systemctl (containers.target here is an example name):

$ systemctl start containers.target

Once the unit files are created and enabled, systemd starts the whole group at boot:

$ systemctl enable containers.target


Conclusion

Docker is not just a single library or technology. It is a complete solution for building and deploying distributed applications. With Docker containers, we can deploy web applications and run system-level services, communicating with databases, making TCP connections, and serving HTTP requests, in an isolated and repeatable way.

To become an expert in using DevOps tools like Docker and applying containerization to continuous integration/continuous deployment, you can study DevOps skills with Simplilearn.  Our certification training courses include Docker Certified Associate (DCA) and Certified Kubernetes Administrator (CKA).  We also offer comprehensive integrated programs for DevOps professionals, such as the Post Graduate Program in DevOps in collaboration with Caltech CTME.

About the Author

Simplilearn

Simplilearn is one of the world’s leading providers of online training for Digital Marketing, Cloud Computing, Project Management, Data Science, IT, Software Development, and many other emerging technologies.
