What Is a Docker Container? Benefits of Docker Containers

The Docker platform allows you to package up your application together with its dependencies and deliver it to the Cloud. If you are starting to create applications in the Cloud, containers are a great way to create isolated environments and scale them up or down automatically.

Our philosophy is to create systems that are focused on developing a single application. By this, I mean that we want to work on one or perhaps two projects at a time and let the rest of our applications run in the Cloud. We break these applications into isolated, encapsulated components that don't share resources. Each component gets its own Dockerfile and build definition, deployed using tools like Docker Compose.

While we typically build a single app on the Docker platform, this doesn't mean you can't have multiple apps, each with its own code repository. Each app in the stack doesn't need to be built and deployed the same way. One system we built ourselves is a microservices architecture. It uses Docker containers (running instances, not to be confused with the Docker images they are created from) to package up its services and deploy them to Cloud providers such as AWS, Google Cloud, and Azure. We have found that building and deploying applications using these containers provides many benefits:

  • Easy clustering and caching of containers
  • Flexible resource sharing
  • Scalability: many containers can run on a single host
  • Running your services on hardware that is much cheaper than standard servers
  • Fast deployment, easy creation of new instances, and faster migrations
  • Ease of moving and maintaining your applications
  • Better security: less access is needed to the code running inside containers, and there are fewer software dependencies

Keep these advantages in mind as you create the cloud container infrastructure you need to build applications in the Cloud. Our philosophy is to use Docker containers for many of our applications instead of building and distributing each application separately as you might with a platform such as Heroku or Google App Engine.


Learning About Docker Containers: Docker Compose

Docker Compose is an open-source tool that allows you to easily define and deploy multi-container applications from a single build definition, the docker-compose.yml file. You define a set of project-specific services and their dependencies, and Compose builds and starts a container for each one. For example, the following two services represent what our application requires to work:

(Note: these two containers are deployed using Docker Compose. Before running the command below, make sure that Docker and Docker Compose are installed on your server.)
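A minimal docker-compose.yml defining two such services might look like the following. The service names, build contexts, and port are illustrative assumptions, not taken from the original project:

```yaml
version: "3.8"

services:
  container_1:
    # Built from a local Dockerfile; substitute your own build context
    # or a registry image as needed.
    build: ./container_1
    ports:
      - "8080:8080"

  container_2:
    build: ./container_2
    # Start container_1 before container_2.
    depends_on:
      - container_1
```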

$ docker-compose up
Creating network "app_default" with the default driver
Creating container_1 ... done
Creating container_2 ... done
Attaching to container_1, container_2
container_1  | Starting service 'app/service/service_1.service'
container_1  | Running service 'app/service/service_1.service'

WARNING: There are two instances of the 'app/service/service_1.service' app installed on this host. Starting 'app/service/service_1.service' on a different host could lead to boot loops or be particularly inefficient. Please be careful to update only one instance of this app at a time.

In the output above, we can see two containers running on the same host. To deploy our application, we'll use the following two commands, which start each service in detached mode.

$ docker-compose up -d app_1
$ docker-compose up -d app_2

As you can see from the output above, our app was built with a single command. Now we can push the image to a registry and deploy the application to the Cloud provider of our choice!

The following commands push the built image to a registry from the build machine and pull it on the target host to deploy our application:

$ docker-compose push app_1
$ docker-compose pull app_1

Ten Steps to Using Docker Containers

Step 1: Moving the Container Infrastructure to the Cloud

After deploying our application, we can confirm that it is running on our cloud provider of choice.

Our deployment originally used Docker Swarm. As we mentioned in our previous article, Swarm is Docker's native clustering and orchestration mode. However, we wanted to avoid the complexity of running a Swarm cluster and integrating it with HashiCorp Vault, so we decided to deploy to a Kubernetes cluster instead. Kubernetes is a container orchestration solution that allows you to deploy and operate clusters running containers of various kinds; our containers now run in a cluster managed by Kubernetes.
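As a sketch, a minimal Kubernetes Deployment for one of our services could look like the following. The name, image reference, replica count, and port are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  replicas: 2                # run two instances for availability
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
        - name: app-1
          # hypothetical registry path; substitute your own image
          image: registry.example.com/app_1:latest
          ports:
            - containerPort: 8080
```

You would apply this manifest to the cluster with kubectl apply -f deployment.yml.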

Step 2: Monitoring Our Environment

To monitor our environment, we can use an existing provider such as Azure Monitor, Splunk, or a similar monitoring service. For example, to record a machine event for our environment, we can use the provider's API. These API requests allow us to monitor our environment, send reports, and delete or update systems.

Note that this is only a small subset of the available options in the specific provider that we choose to use. You can learn more about these providers in the respective providers' documentation.

Step 3: Configuring the System

Now that we've deployed our application, we need to create a few necessary components to run our system.

The Dockerfile is responsible for configuring our production environment. A Dockerfile is a plain-text build script, with its own instruction syntax (FROM, RUN, COPY, and so on), that Docker uses to assemble an image of our system. This script configures the various components and packages our application for deployment.
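A minimal Dockerfile for a service like ours might look like this. The base image, file paths, and start command are illustrative assumptions:

```dockerfile
# Hypothetical Python service; substitute your own runtime and paths.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source and define how the container starts.
COPY . .
CMD ["python", "service.py"]
```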

Step 4: Adding Vault

Vault is responsible for protecting, securing, and controlling access to sensitive data. It stores secrets, credentials, and encryption keys, and is used across a wide range of environments.

Using Vault, we can configure a specific user or group to access the various source code repositories and databases.
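For example, a Vault policy granting a group read-only access to database credentials might look like this. The policy name and secret path are hypothetical:

```hcl
# developers.hcl - hypothetical policy for the developer group.
# Allows reading database credentials, but not changing them.
path "secret/data/app/database" {
  capabilities = ["read"]
}
```

The policy would then be registered with vault policy write developers developers.hcl and attached to the relevant user or group.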

To add Vault to the project, we need to edit the project's source code and add the source files in the right place. These files are located in the /vault directory. We also have to include the dependencies of Vault.

To add the necessary Vault source files to the project, we can do the following:

$ git add vault/cache.yml vault/source.json vault/rpc.yml
$ git add vault/provision.yml
$ git add vault/user.yml
$ git commit -m "Add Vault source files."

We now have our code in the right place using this method and can build our container images.

Step 5: Run Application

To run the application, we can use Docker Compose from the project's root directory.

$ docker-compose run app

This command builds the image from the Dockerfile if necessary and starts a one-off container for the app service defined in docker-compose.yml.


Step 6: Start Machine

The application will now start and expose a basic, straightforward API.

Step 7: Deploy to Machine

To deploy the application to our machine, we can use the following command.

$ docker-compose up -d

We can then see the instance that is running our application.

Step 8: Restarting Machine

To restart our instance, we can use the restart command (note that docker-compose down stops and removes containers rather than restarting them, and it does not take a -d flag):

$ docker-compose restart

And we can see that our instance is running again.

Step 9: View State

Now that the application is running, we can do the following:

$ docker-compose ps

This command lists each container in the application along with its current state.

We can see that our containers are up and running our application.
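Depending on your Compose version, the output will look something like the following. The service name, command, and port shown here are illustrative:

```
   Name          Command         State           Ports
--------------------------------------------------------------
app_app_1   python service.py    Up      0.0.0.0:8080->8080/tcp
```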

Step 10: Inspect /vault/code

To inspect the contents of our source code inside the running container, we can use the following command (here, app is the service name from docker-compose.yml):

$ docker-compose exec app ls /vault/code

This command lists the contents of the /vault/code directory inside the app service's container.


Conclusion

Now that you know how to run and deploy a containerized application using Docker, it is time to look at an example of a fully automated containerized application in a future article. In the meantime, to advance your understanding and practical experience of DevOps tools, consider the Post Graduate Program in DevOps. This is a comprehensive certification program offered in partnership with Caltech CTME.


About the Author

Shivam Arora

Shivam Arora is a Senior Product Manager at Simplilearn. Passionate about driving product growth, Shivam has managed key AI- and IoT-based products across different business functions. He has 6+ years of product experience with a Master's in Marketing and Business Analytics.
