Docker uses containers to create virtual environments that isolate a TensorFlow installation from the rest of the system. TensorFlow programs run inside this virtual environment, which shares resources with the host machine.

Docker allows us to effortlessly replicate the working environment used to train and run a machine learning model from any location. It lets us package our code and dependencies into containers that can be transferred to different hosts, regardless of hardware or operating system. In short, it is free, open-source software that makes it simple, secure, and consistent to run several applications on the same server.


TensorFlow Docker Requirements

  • Docker Should Be Installed on Your Local Host Machine.

Docker is a free and open platform for building, delivering, and operating apps. Docker allows us to decouple our apps from our infrastructure, allowing us to swiftly release software. We can manage our infrastructure the same way we control our applications with Docker. We may also drastically minimize the time between writing code and executing it in production by leveraging Docker's approaches for shipping, testing, and deploying code quickly.

  • Install NVIDIA Docker Support for GPU Acceleration on Linux.

Users can utilize the NVIDIA Container Toolkit to create and execute GPU-accelerated Docker containers. The toolkit includes a container runtime library and utilities, along with facilities for automatically configuring containers to use NVIDIA GPUs.

Make sure your Linux distribution has the NVIDIA driver and the Docker engine installed before getting started. The CUDA Toolkit does not need to be installed on the host system, but the NVIDIA driver must be.

Install Docker

Docker Desktop makes it simple to create, distribute, and operate containers on Mac and Windows, just like it does on Linux. Docker takes care of the complicated setup, allowing us to concentrate on writing code. Docker Engine is available as a static binary installation on a range of Linux platforms, as well as macOS and Windows 10 through Docker Desktop.

  • Docker Desktop on Mac

System Requirements:

Your Mac must meet the following requirements to install Docker Desktop successfully.

  • Mac with Intel chip
  • Mac with Apple silicon

  • Mac with Intel Chip

Docker Desktop is compatible with the current macOS release and the two preceding major releases. As new major versions of macOS become generally available, Docker drops support for the oldest version and supports the newest one (in addition to the previous two releases). Docker Desktop presently supports macOS Catalina, macOS Big Sur, and macOS Monterey.

  • At least 4 GB of RAM.
  • VirtualBox prior to version 4.3.30 must not be installed as it is not compatible with Docker Desktop.

  • Mac With Apple Silicon

The GA release of Docker Desktop for Mac on Apple silicon is now available. This allows us to develop applications in our preferred local development environment and extends ARM-based application development.

Docker Desktop for Apple silicon also supports multi-platform images, allowing us to create and execute images for both x86 and ARM architectures without the need for a complicated cross-compilation development environment.


Install and Run Docker Desktop on Mac

1. Double-click Docker.dmg to open the installer, then drag the Docker icon to the Applications folder.


2. Double-click Docker.app in the Applications folder to start Docker.


3. The Docker menu (whale menu) displays the Docker Subscription Service Agreement window. It incorporates an update to Docker Desktop's terms of service; read it carefully.

4. To continue, check the box to indicate that we accept the amended terms, then click Accept. Docker Desktop starts after the terms are accepted.

*Important: If we do not agree to the terms, Docker Desktop will close, and we will no longer be able to use it on our computer. We can accept the terms later by launching Docker Desktop again.

After installation, Docker Desktop launches the Quick Start Guide. The tutorial includes a simple exercise to build an example Docker image, run it as a container, and push it to Docker Hub.
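The Quick Start exercise can be sketched with a minimal Dockerfile (a hypothetical example serving a static page with nginx; the file name index.html is a placeholder, and the tutorial's own sample image may differ):

```dockerfile
# Hypothetical minimal image for the Quick Start exercise
FROM nginx:alpine
# Copy a local static page into the image (index.html is a placeholder name)
COPY index.html /usr/share/nginx/html/index.html
```

From there, docker build -t <dockerhub-user>/quickstart . builds the image, docker run -p 8080:80 <dockerhub-user>/quickstart runs it as a container, and docker push <dockerhub-user>/quickstart saves it to Docker Hub.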


  • Docker Desktop on Windows (Windows 10 Pro or later)

System Requirements:

Windows machines must meet the following requirements to successfully install Docker Desktop.

  • WSL 2 backend
  • Hyper-V backend and Windows containers

  • WSL 2 Backend

  • 64-bit processor with Second Level Address Translation
  • 4GB system RAM
  • Hardware virtualization support must be enabled in the BIOS settings.

  • Hyper-V backend and Windows containers

Docker Desktop shares containers and images among all user accounts on the machine where it is installed, since all Windows accounts use the same virtual machine to build and run containers.

Installation:

  1. Download Docker.
  2. Double-click InstallDocker.msi to run the installer.
  3. Follow the Install Wizard: accept the license, authorize the installer, and proceed with the install.
  4. Click Finish; Docker launches automatically.
  5. Docker displays a “Welcome” window with tips and access to the Docker documentation.



Verification:

The whale in the status bar indicates a running (and accessible via terminal) Docker instance.

Open PowerShell or your favorite Windows terminal (e.g., Command Prompt) and enter docker run hello-world.

Windows prompts us for access every time Docker starts, allowing Docker to manage the Hyper-V VMs. The first time Docker starts, we may need to provide the token from the Beta invitation email. When initialization completes, select About Docker from the notification area to verify that we have the latest version.

From PowerShell (or your favorite Windows terminal), check the versions of docker, docker-compose, and docker-machine to verify your installation:

PS C:\Users\username> docker --version

PS C:\Users\username> docker-compose --version

PS C:\Users\username> docker-machine --version

Before we stop, let’s test a Dockerized web server.

NGINX is a well-known lightweight web server used to serve server-side applications. It's open source and runs on a wide range of operating systems. Docker ensures that nginx is well supported because it is a popular web server for development.

The various procedures for getting the Docker container for nginx up and running are:

  • The first step is to get the image from Docker Hub. When we log into Docker Hub, we can search for and view the nginx image. Simply type nginx into the search bar and click the nginx (official) link that appears in the results.


  • The Docker pull command for nginx may be found in the repository's information on Docker Hub.


  • To obtain the most recent nginx image from Docker Hub, run the Docker pull command as shown above on the Docker Host.


  • Now, use the following command to start the nginx container.

sudo docker run -p 8080:80 -d nginx

This maps port 80 inside the nginx container to port 8080 on the Docker host.


If we go to the URL http://dockerhost:8080 after running the command, we will see the nginx welcome page, indicating that the nginx container is operational.



Serving With Docker

TensorFlow Serving is Google's system for serving machine learning models in production, widely used by Google and other large tech organizations. It makes it simple to deploy our model with the same server architecture and APIs. Though it works best with a TensorFlow model, it can be adapted to serve other kinds of models as well.


At a high level, the entire process runs from building the model to serving it at an endpoint with TensorFlow Serving. For most types of models, the ideal solution may be to operate a centralized model on a server that any device, whether desktop, mobile, or embedded, can query. The server then performs the inference for us and returns the results, which we can render on any device. This design has a major advantage when numerous clients access an endpoint that is centralized on a server.

Let's begin by downloading the most recent Tensorflow Serving image.

docker pull tensorflow/serving

Next, run the Serving image with our model mounted and published on the REST API endpoint.

docker run -p 8501:8501 --mount type=bind,source=/path/to/the/unzipped/model/tmp/,target=/models/fashion_mnist -e MODEL_NAME=fashion_mnist -t tensorflow/serving

Query the model using predict API.

curl -d '{"signature_name": "serving_default", "instances": [[[[0.0], [0.0] ... [0.0]]]]}' -X POST http://localhost:8501/v1/models/fashion_mnist:predict
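The same predict request can be built in Python with only the standard library. A minimal sketch, assuming a Fashion-MNIST model with 28×28 grayscale inputs (build_predict_request is an illustrative helper, not part of TensorFlow Serving):

```python
import json

def build_predict_request(instances, signature="serving_default"):
    # Build the JSON body that TensorFlow Serving's REST predict API expects.
    return json.dumps({"signature_name": signature, "instances": instances})

# One all-zero 28x28x1 grayscale image, matching the curl example above.
image = [[[0.0] for _ in range(28)] for _ in range(28)]
body = build_predict_request([image])

# The body would be POSTed (e.g. with urllib.request) to the model's
# predict endpoint: http://localhost:8501/v1/models/fashion_mnist:predict
```

This keeps the request payload in code instead of an inline curl string, which is easier to extend to real image data.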

Serving With Docker Using Your GPU

Nvidia-docker is a wrapper for the docker command that transparently provisions a container with the components needed to run programs on the GPU. It's only required if we are using nvidia-docker run to run a container that needs GPUs.

Before serving with a GPU, we need two things:

  1. Up-to-date NVIDIA drivers for your system
  2. nvidia-docker

Let us try and understand this through an example:

Pull the latest TensorFlow Serving GPU docker image by running the following command:

docker pull tensorflow/serving:latest-gpu

We'll utilize the Half Plus Two toy model, which generates 0.5 * x + 2 for the x values we provide for prediction. The model's ops are pinned to the GPU device, so it will not run on the CPU.
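The served model's arithmetic is easy to sanity-check offline; a quick Python sketch of the function the Half Plus Two servable computes:

```python
def half_plus_two(x):
    # The toy model's prediction rule: 0.5 * x + 2.
    return 0.5 * x + 2

# Predictions for a few sample inputs.
print([half_plus_two(x) for x in [1.0, 2.0, 5.0]])  # -> [2.5, 3.0, 4.5]
```

Comparing these values against the server's responses is a simple way to confirm the GPU-backed deployment is wired up correctly.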

mkdir -p /tmp/tfserving

cd /tmp/tfserving

git clone https://github.com/tensorflow/serving

Run the TensorFlow Serving container, pointing it to this model and opening the REST API port:

docker run --gpus all -p 8501:8501 \
  --mount type=bind,\
source=/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_gpu,\
target=/models/half_plus_two \
  -e MODEL_NAME=half_plus_two -t tensorflow/serving:latest-gpu &

This starts the Docker container, launches the TensorFlow Serving Model Server, binds the REST API port 8501, and maps our desired model from the host into the container's model directory.

2018-07-27 00:07:20.773693: I tensorflow_serving/model_servers/main.cc:333]

Exporting HTTP/REST API at:localhost:8501 ...


Conclusion

Docker allows us to segment an application so we can refresh, clean up, and repair it without having to shut it down completely. Additionally, Docker allows us to create an architecture for apps consisting of tiny processes that communicate via APIs.


About the Author

Simplilearn

Simplilearn is one of the world’s leading providers of online training for Digital Marketing, Cloud Computing, Project Management, Data Science, IT, Software Development, and many other emerging technologies.
