Docker is a platform that provides tools to create, build, run, test, and deploy distributed applications, originally in Linux environments. Amazon Elastic Container Service (ECS) is a cloud computing service from Amazon Web Services (AWS) that orchestrates containers, letting developers run applications in the cloud without provisioning and configuring the underlying environment themselves. With an AWS account, developers can deploy applications at scale across groups of servers called clusters, using API calls and built-in task definitions.
Amazon ECS uses Docker images in task definitions to launch containers as tasks in clusters.
The Docker Compose CLI lets developers use native Docker commands to run applications on the Amazon Elastic Container Service (ECS) when building cloud-native applications.
The integration between Amazon ECS and Docker lets developers use the Docker Compose CLI to:
- Set up an AWS context in one Docker command, allowing you to switch from a local context to a cloud context and run applications quickly and efficiently.
- Simplify multi-container application development on Amazon ECS using Compose files.
To deploy a Docker container on ECS, the following prerequisites must be met:
- Download and install the latest version of Docker Desktop (available for Windows and Mac).
- Install the Docker Compose CLI for Linux.
- Make sure you have a working AWS account.
Docker makes it easy for developers to deploy containers using a Compose file with the docker compose up command. The same Compose file can also be used to run multi-container applications locally.
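As a minimal sketch, a Compose file for a two-service application might look like the following; the service names and images are illustrative placeholders, not taken from the article:

```yaml
# compose.yaml -- hypothetical two-service application
services:
  web:
    image: nginx:alpine        # illustrative image
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    image: myorg/api:latest    # placeholder image name
```

The same file can drive docker compose up locally and, with an ECS context selected, a deployment to Amazon ECS.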
The following sections cover deployment instructions for a Compose application on Amazon ECS.
Run an Application on ECS
AWS uses a fine-grained permission model, with a specific role for each resource type and operation.
To ensure that the Docker ECS integration is allowed to manage resources for your Compose application, your AWS credentials must grant access to the relevant AWS IAM permissions.
GPU support, which relies on EC2 instances to run containers with configured GPU devices, requires additional permission types.
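The permissions themselves are granted on the AWS side; in the Compose file, a GPU requirement is expressed as a device reservation. A sketch, where the service and image names are assumptions:

```yaml
services:
  trainer:
    image: myorg/gpu-workload:latest   # placeholder image
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: ["gpu"]    # request GPU-capable instances
              count: 1
```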
Create AWS Context
- Run the docker context create ecs myecscontext command to create an Amazon ECS Docker context named myecscontext.
- If the AWS CLI is already installed and configured, the setup command lets you select an existing AWS profile to connect to Amazon.
- You can also create a new profile by providing an AWS access key ID and a secret access key.
- Finally, you can configure your ECS context to retrieve AWS credentials via the AWS_* environment variables, a standard way to integrate with third-party tools and single-sign-on providers.
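A sketch of defining the AWS_* environment variables in a shell; all values are placeholders you must replace with your own credentials:

```shell
# Placeholder credentials -- substitute your own values.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"
```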
After creating the AWS context, you can list all Docker contexts by running the docker context ls command:
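The context steps above look roughly like this in a shell session; the context name comes from the article, while the listing output is an abbreviated illustration and will differ on your machine:

```shell
$ docker context create ecs myecscontext
$ docker context ls
NAME            TYPE
default *       moby
myecscontext    ecs
```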
Run a Compose Application
To deploy and run multi-container applications described in Compose files on Amazon ECS with the docker compose command, first make sure you are using the ECS context:
- You can pass the --context myecscontext flag with each command.
- You can also set the current context with the command docker context use myecscontext.
- Run docker compose up and docker compose down to start and then stop a complete Compose application.
- docker compose up uses the compose.yaml file in the current folder by default. You can set the working directory with the --workdir flag, or point to the Compose file directly with docker compose --file mycomposefile.yaml up.
- You can also set a name for the Compose application with the --project-name flag during deployment. If no name is specified, a name is derived from the working directory.
- The Docker ECS integration converts the Compose application model into a collection of AWS resources, described as a CloudFormation template.
- You can inspect the generated template with the docker compose convert command, and follow the CloudFormation stack in the AWS web console after running docker compose up; CloudFormation events are also displayed in your terminal.
- The services created for the Compose application on Amazon ECS, and their state, can be checked with the docker compose ps command.
- Logs from the containers that are part of the Compose application can be fetched with the docker compose logs command.
- Running docker compose up on an updated Compose project applies the changes to the application in production without disruption.
- After you run docker compose up with an updated Compose file, the ECS service generates a rolling update configuration. The stack is updated to reflect the modifications, and services are replaced if needed.
- This replacement follows the rolling-update settings in your service's deploy.update_config configuration.
- AWS ECS uses a percent-based model to determine how many containers to run or shut down during a rolling update.
- The Docker Compose CLI computes the rolling update configuration from the parallelism and replicas fields.
- You can also configure a rolling update directly with the extension fields x-aws-min_percent and x-aws-max_percent.
- The former sets the minimum percent of containers to keep running for a service, and the latter sets the maximum percent of additional containers to start before previous versions are removed.
- This allows ECS, for example, to run twice the number of containers for a service (200%) and to shut down 100% of the old containers during the operation.
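As a sketch, the rolling-update extension fields from the last points could appear in a service definition like this; the service name, image, and percentages are placeholders:

```yaml
services:
  web:
    image: myorg/web:latest      # placeholder image
    x-aws-min_percent: 50        # keep at least half the containers running
    x-aws-max_percent: 200       # allow up to double the containers during an update
    deploy:
      replicas: 4
```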
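Put together, the deployment commands described above might be used like this; the project name is a placeholder, and the commands assume an ECS context and a compose.yaml in the current directory:

```shell
$ docker context use myecscontext          # select the ECS context
$ docker compose up                        # deploy the application
$ docker compose --project-name myapp ps   # check service state
$ docker compose convert                   # inspect the generated CloudFormation template
$ docker compose down                      # tear the application down
```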
View Application Logs
The Docker Compose CLI configures AWS CloudWatch Logs for your containers. By default, you can access and view logs of your Compose applications in the same way you check logs of local deployments.
For example:
# Fetch logs for the application in the current working directory
$ docker compose logs
# Specify the Compose project name
$ docker compose --project-name PROJECT logs
# Specify the Compose file
$ docker compose --file /path/to/docker-compose.yaml logs
A log group is created for the application as docker-compose/<application_name>, and log streams are created for each service and container in the application as <application_name>/<service_name>/<container_ID>.
You can fine-tune AWS CloudWatch Logs using the extension field x-aws-logs_retention in the Compose file to set the number of days to retain log events. The default behavior is to keep logs forever.
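A sketch of setting the retention field; the service details are placeholders, and placing the field at the service level is an assumption:

```yaml
services:
  app:
    image: myorg/app:latest       # placeholder image
    x-aws-logs_retention: 30      # keep log events for 30 days
```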
Private Docker Images
The Docker Compose CLI automatically sets up authorization so that you can pull private images from an Amazon ECR registry on the same AWS account. Pulling private images from another registry requires a few extra steps:
- Create a Username + Password (or Username + Token) secret in the AWS Secrets Manager service.
- The Docker Compose CLI offers the docker secret command, so you can manage secrets created in AWS Secrets Manager without installing the AWS CLI.
- Create a token.json file that defines your Docker Hub username and access token.
You can then create a secret from this file using docker secret.
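A sketch of the token.json file and the secret creation; the username, token, secret name, and returned ARN are all illustrative placeholders:

```shell
$ cat token.json
{"username": "myhubuser", "password": "dckr_pat_example"}
$ docker secret create dockerhubAccessToken token.json
arn:aws:secretsmanager:eu-west-3:12345:secret:dockerhubAccessToken
```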
After creating the secret, you can reference its ARN in your Compose file using the x-aws-pull_credentials custom extension together with the Docker image URI for your service.
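A sketch of referencing the secret's ARN; the service name, image, and ARN are placeholders:

```yaml
services:
  worker:
    image: mycompany/privateimage    # placeholder private image
    x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-3:12345:secret:dockerhubAccessToken"
```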
By default, service-to-service communication works transparently: you can deploy a Compose application with multiple interconnected services without changing the Compose file between local and ECS deployments. Individual services can run with distinct constraints (memory, CPU) and replication rules.
Services are automatically registered by the Docker Compose CLI with AWS Cloud Map during application deployment. They are declared as fully qualified domain names of the form <service>.<compose_project_name>.local.
Services can reach their dependencies by their Compose service names, just as they do when deployed locally with docker compose.
Dependent Service Startup Time and DNS Resolution
Services are scheduled on ECS concurrently when a Compose file is deployed.
AWS Cloud Map introduces an initial delay before the DNS service is able to resolve your services' domain names. Your application code must tolerate this delay, either by waiting for dependent services to be ready or by adding a wait script as the entry point to your Docker image. This need to wait for dependent services also exists when you deploy the application locally with docker compose.
- You can also use the depends_on feature of the Compose file format. With it, a dependent service is created first, and application deployment waits for it to be up and running before starting creation of the services that depend on it.
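A sketch of depends_on; the service names and images are placeholders:

```yaml
services:
  web:
    image: myorg/web:latest   # placeholder image
    depends_on:
      - db                    # db is created and running before web starts
  db:
    image: postgres:15        # illustrative image
```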
Service isolation is implemented through security group rules, which allow services sharing a common Compose file network to communicate with each other using their Compose service names.
Tuning the CloudFormation Template
The Docker Compose CLI relies on Amazon CloudFormation to manage the application deployment. For more control over the created resources, you can use docker compose convert to generate a CloudFormation stack file from your Compose file. This lets you review the resources it defines, or customize the template to your needs, and then apply the template to AWS with the AWS CLI or the AWS web console.
Once you know the modifications required in your CloudFormation template, you can include overlays in your Compose file that are automatically applied during docker compose up.
An overlay is a YAML object that uses the same CloudFormation template data structure as the one generated by the ECS integration, but contains only the attributes to be added or changed. It is merged with the generated template before being applied to the AWS infrastructure.
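As a sketch, an overlay is written under an x-aws-cloudformation section and mirrors the CloudFormation template structure; the resource name and certificate ARN here are assumptions, not from the article, and must match resources in your own generated template:

```yaml
x-aws-cloudformation:
  Resources:
    WebTCP80Listener:             # must match a resource name in the generated template
      Properties:
        Certificates:
          - CertificateArn: "arn:aws:acm:certificate/123abc"  # placeholder ARN
```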
We hope this article helped you understand how Docker works with Amazon ECS. We discussed how ECS runs Docker containers and how the Docker Compose CLI deploys Compose applications to it, along with examples that will be helpful to developers from Java and .NET backgrounds, application architects, cloud specialists, and other learners looking for information on running Docker containers with ECS.
Besides pursuing the varied courses provided by Simplilearn, you can also sign up on our SkillUp platform. This platform, a Simplilearn initiative, offers numerous free online courses covering the basics of multiple programming languages and cloud technologies, including Docker. You can also opt for our Full-Stack Web Development Certification Course to improve your career prospects.