The rise of Artificial Intelligence (AI) and deep learning has propelled the growth of TensorFlow, an open-source AI library that uses data flow graphs to build models. If you want to pursue a career in AI, knowing the basics of TensorFlow is crucial. This tutorial from Simplilearn can help you get started.

## Prerequisites for the TensorFlow Tutorial

You should have good knowledge of some programming language—preferably Python. It is also important to have an understanding of machine learning to understand the use case and examples.

Before diving into what TensorFlow is, you should know about deep learning and its libraries.

## What is Deep Learning?

Deep learning is a subset of machine learning that is modeled loosely on the structure and function of the human brain. It learns from unstructured data and uses complex algorithms to train a neural net.

We primarily use neural networks in deep learning, which is based on AI. Here, we train networks to recognize text, numbers, images, voice, and so on. Unlike traditional machine learning, the data here is far more complicated, unstructured, and varied, such as images, audio, or text files. One of the core components of deep learning is the neural network, which is typically structured in layers.

In a neural network, there is an input layer, an output layer, and in between, there are several hidden layers. Any neural network has at least one hidden layer; a deep neural network is one that has more than one.

Let us explore the different layers in more detail.

### Input Layer

The input layer accepts large volumes of data as input to build the neural network. The data can be in the form of text, image, audio, etc.

### Hidden Layer

This layer processes data by performing complex computations and carries out feature extraction. As part of the training, these layers have weights and biases that are continuously updated until the training process is complete. Each neuron has multiple weights and one bias. After computation, the values are passed to the output layer.

### Output Layer

The output layer generates predicted output by applying suitable activation functions. The output can be in the form of numeric or categorical values.

For example, if it is an image classification application, it tells us which class a particular image may belong to. The input can be multiple images, such as cats and dogs. The output can be in the form of binary classification like the number zero for the dog and the number one for the cat.

The network can be extended with multiple neurons on the output side to have many more classes. It can also be used for regression and time series problems.
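To make the flow through these layers concrete, here is a minimal pure-Python sketch of a forward pass through one hidden layer; the inputs, weights, and biases are made-up illustrative values, not taken from any real model:

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # each neuron computes a weighted sum of its inputs plus a bias,
    # then applies an activation function
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

x = [0.5, 0.8]                     # input layer: two features
h = [neuron(x, [0.4, -0.2], 0.1),  # hidden layer: two neurons,
     neuron(x, [-0.3, 0.6], 0.0)]  # each with its own weights and bias
y = neuron(h, [1.2, -0.7], 0.2)    # output layer: one prediction in (0, 1)
print(round(y, 3))                 # 0.607
```

In a real network, training continuously adjusts these weights and biases until the predictions match the labels.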

There are a few prerequisites needed for developing a deep learning application. You need a strong knowledge of Python, but it’s also helpful to know other programming languages, such as R, Java, or C++.

The top deep learning libraries are discussed in the next section of this What is TensorFlow article.

## Top Deep Learning Libraries

There are some libraries that are readily available, primarily for performing machine learning and deep learning programming. Some of the most common libraries are as follows:

### Keras

- Developed by Francois Chollet
- An open-source library written in Python

### Theano

- Developed by the University of Montreal
- Written in Python

### TensorFlow

- Developed by Google Brain Team
- Written in C++, Python, and CUDA

### DL4J

- Developed by the Skymind engineering team and DeepLearning4J community
- Written in C++ and Java

### Torch

- Created by Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet
- Written in Lua and C (its Python-based successor is PyTorch)

There are multiple libraries available to the user. But in this tutorial, we will focus on Google’s TensorFlow, an open-source library, which is currently a popular choice. Keras, which was also once a popular choice, has now been integrated with TensorFlow.

TensorFlow supports multiple languages, though Python is by far the most suitable and commonly used.

Now that you understand some of the basics, we can discuss what TensorFlow is.

## What is TensorFlow?

TensorFlow is an open-source library developed by Google, used primarily for deep learning applications. It also supports traditional machine learning. TensorFlow was originally developed for large numerical computations, without deep learning in mind. However, it proved to be very useful for deep learning development as well, and so Google open-sourced it.

TensorFlow accepts data in the form of multi-dimensional arrays of higher dimensions called tensors. Multi-dimensional arrays are very handy in handling large amounts of data.

TensorFlow works on the basis of data flow graphs that have nodes and edges. As the execution mechanism is in the form of graphs, it is much easier to execute TensorFlow code in a distributed manner across a cluster of computers while using GPUs.

The next part of the What is TensorFlow tutorial covers the history of TensorFlow and why you should use it.

## History of TensorFlow

TensorFlow was first made public in 2015, while the first stable version appeared on February 11, 2017. It was created and is maintained by Google. Since then, it has become one of the most popular frameworks for deep learning and machine learning projects, with a vast library for large-scale machine learning and numerical computation. Here are some more milestones of TensorFlow:

- In December 2017, Kubeflow was released to ease the operation and deployment of TensorFlow on Kubernetes.
- In March 2018, Google announced TensorFlow.js for machine learning in JavaScript.
- In January 2019, Google announced TensorFlow 2.0, which became officially available in September 2019 and added a number of new components.
- In May 2019, TensorFlow Graphics was released for deep learning in computer graphics.

## Why TensorFlow?

### TensorFlow Offers Both C++ and Python APIs

Before the development of libraries, the coding mechanism for machine learning and deep learning was much more complicated. This library provides a high-level API, and complex coding isn’t needed to prepare a neural network, configure a neuron, or program a neuron. The library completes all of these tasks. TensorFlow also has integration with Java and R.

### TensorFlow Supports Both CPU and GPU Computing Devices

Deep learning applications are very complicated, with the training process requiring a lot of computation. It takes a long time because of the large data size, and it involves several iterative processes, mathematical calculations, matrix multiplications, and so on. If you perform these activities on a normal Central Processing Unit (CPU), typically it would take much longer.

Graphical Processing Units (GPUs) are popular in the context of games, where you need the screen and image to be of high resolution. GPUs were originally designed for this purpose. However, they are being used for developing deep learning applications as well.

One of the major advantages of TensorFlow is that it supports GPUs, as well as CPUs. It also has a faster compilation time than other deep learning libraries, like Keras and Torch.

The following sections of the What is TensorFlow article explain how TensorFlow works and introduce tensors in more detail.

## How TensorFlow Works

TensorFlow allows you to create dataflow graphs that describe how data moves through a graph. The graph consists of nodes that represent a mathematical operation. A connection or edge between nodes is a multidimensional data array. It takes inputs as a multi-dimensional array where you can construct a flowchart of operations that can be performed on these inputs.
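In TensorFlow 2.x, the tf.function decorator traces ordinary Python code into such a dataflow graph; a minimal sketch (assuming TensorFlow 2.x is installed) might look like this:

```python
import tensorflow as tf

@tf.function  # traces the Python function into a dataflow graph
def flow(x, y):
    s = tf.add(x, y)          # node: an addition op
    return tf.multiply(s, y)  # node: a multiplication op; the edges carry tensors

out = flow(tf.constant(2.0), tf.constant(3.0))
print(out.numpy())  # 15.0
```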

## TensorFlow Architecture

TensorFlow's architecture works in three significant steps:

- Data pre-processing - structure the raw data and normalize it to a common scale
- Building the model - build the model for the data
- Training and estimating the model - use the data to train the model, then test it with unseen data
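As one example of the pre-processing step, min-max normalization rescales every feature to a common [0, 1] range; a small pure-Python sketch (the function name and values are illustrative):

```python
def min_max_normalize(values):
    """Rescale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([10, 20, 30, 40]))
# [0.0, 0.3333333333333333, 0.6666666666666666, 1.0]
```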

## Where Can Tensorflow Run?

TensorFlow requirements can be classified into the development phase (training the model) and run phase (running the model on different platforms). The model can be trained and used on GPUs as well as CPUs. Once the model has been trained, you can run it on:

- Desktop (Linux, Windows, macOS)
- Mobile devices (iOS and Android)
- Cloud as a web service

## Introduction to Components of TensorFlow

### Tensor

Tensors are the core data structure of TensorFlow, and all computations in TensorFlow involve them. A tensor is an n-dimensional matrix that can represent many types of data. A tensor can be the result of a computation, or it can originate from the input data.

Fig: Tensor

### Graphs

Graphs describe all the operations that take place during the training. Each operation is called an op node and is connected to the other. The graph shows the op nodes and the connections between the nodes, but it does not display values.

Fig: Graphs

## What are Tensors?

A tensor is a generalization of vectors and matrices to potentially higher dimensions. Arrays of data with varying dimensions and ranks that are fed as input to the neural network are called tensors.

For deep learning, especially in the training process, you will have large amounts of data in very complicated formats. Tensors let you store and use that data compactly, even when it takes the form of multi-dimensional arrays, before it is fed into the neural network.

There are some terms associated with tensors that we need to familiarize ourselves with:

### Dimension

Dimension is the size of the array along a particular axis; a tensor can have a different size in each of its dimensions.

### Ranks

Tensor ranks are the number of dimensions used to represent the data. For example:

Rank 0 - When there is only a single element with no axes. We also call this a scalar.

Example: s = 2000

Rank 1 - This refers to a one-dimensional array, called a vector.

Example: v = [10, 11, 12]

Rank 2 - This is traditionally known as a two-dimensional array or a matrix.

Example: m = [[1, 2, 3], [4, 5, 6]]

Rank 3 - This refers to a three-dimensional array; arrays of rank three and higher are generally referred to as tensors.

Example: t = [[[1],[2],[3]],[[4],[5],[6]],[[7],[8],[9]]]

Ranks can be four, five, and so on.
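The ranks above can be checked with a few lines of plain Python, since the nesting depth of a list corresponds to the tensor's rank (the helper function below is illustrative, not part of TensorFlow):

```python
def rank(x):
    # the nesting depth of a Python list corresponds to the tensor rank
    r = 0
    while isinstance(x, list):
        r += 1
        x = x[0]
    return r

s = 2000                                   # rank 0: scalar
v = [10, 11, 12]                           # rank 1: vector
m = [[1, 2, 3], [4, 5, 6]]                 # rank 2: matrix
t = [[[1], [2], [3]], [[4], [5], [6]]]     # rank 3: tensor
print(rank(s), rank(v), rank(m), rank(t))  # 0 1 2 3
```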

The data flow graph is another important topic in the What is TensorFlow article, and it is discussed in the next section.

## What is a Data Flow Graph?

When we have the data stored in tensors, there are computations that need to be completed, which happens in the form of graphs.

Unlike traditional programming, where written code gets executed in sequence, here we build data flow graphs that consist of nodes. The graphs are then executed in the form of a session. It is important to remember that we first have to create a graph. When we do so, none of the code is actually getting executed. You execute that graph only by creating a session.

Each computation in TensorFlow is represented as a data flow graph.

When you start creating a TensorFlow object, there will be a default graph. In more advanced programming, you can actually have multiple graphs instead of a default graph. You can create your own graph as well. The graph is executed and it processes all the data that is fed in. All the external data is fed in the form of placeholders, variables, and constants.

Once you have the graph, the execution can be enabled either on regular CPUs or GPUs, or distributed across several of them so that the processing becomes much faster. As the training of the models in deep learning takes extremely long because of the large amount of data, using TensorFlow makes it much easier to write the code for GPUs or CPUs and then execute it in a distributed manner.

When learning ‘What is TensorFlow?’ you should know about the program elements also.

## List of Prominent Algorithms Supported by TensorFlow

Here is the list of algorithms currently supported by TensorFlow:

- Classification - tf.estimator.LinearClassifier
- Linear regression - tf.estimator.LinearRegressor
- Boosted tree classification - tf.estimator.BoostedTreesClassifier
- Boosted tree regression - tf.estimator.BoostedTreesRegressor
- Deep learning classification - tf.estimator.DNNClassifier
- Deep learning wide and deep - tf.estimator.DNNLinearCombinedClassifier

## Program Elements in TensorFlow

TensorFlow programs work on two basic concepts:

- Building a computational graph
- Executing a computational graph

First, you need to start by writing the code for preparing the graph. Following this, you create a session where you execute this graph.

TensorFlow programming is slightly different from regular programming. Even if you're familiar with Python programming or machine learning programming in scikit-learn, this may be a new concept to you.

The way data is handled inside the program is a little different from a regular programming language, where a variable needs to be created for anything that keeps changing.

In TensorFlow, however, data can be stored and manipulated using three different programming elements:

- Constants
- Variables
- Placeholders

### Constants

Constants are parameters with values that do not change. To define a constant, we use the tf.constant() command.

Example:

a = tf.constant(2.0, tf.float32)

b = tf.constant(3.0)

print(a, b)

In the case of constants, you cannot change their values during the computation.

### Variables

Variables allow us to add new trainable parameters to the graph. To define a variable, we use the tf.Variable() command and initialize it before running the graph in a session.

Example:

W = tf.Variable([.3],dtype=tf.float32)

b = tf.Variable([-.3],dtype=tf.float32)

x = tf.placeholder(tf.float32)

linear_model = W*x+b

### Placeholders

Placeholders allow us to feed data into a TensorFlow model from outside the model. They permit values to be assigned later. To define a placeholder, we use the tf.placeholder() command.

Example:

a = tf.placeholder(tf.float32)

b = a*2

with tf.Session() as sess:

result = sess.run(b,feed_dict={a:3.0})

print(result)

Placeholders are a special type of variable and can be a new concept for many of us. They are like variables, but they are used for feeding in data from outside, typically loaded from a local file, an image file, a CSV file, and so on, on a regular basis. One reason for this kind of provision is that loading the entire input in one shot can make the memory very difficult to handle.

There is a certain way of populating the placeholder called feed_dict, which specifies tensors that provide values to the placeholder.

In a nutshell, constants, variables, and placeholders handle data within the flow program, after which you have to create a graph and run a session.

Let us know more about Sessions in the following part of the What is TensorFlow article.

### Session

A session is run to evaluate the nodes. This is called the TensorFlow runtime.

Example:

a = tf.constant(5.0)

b = tf.constant(3.0)

c = a*b

# Launch Session

sess = tf.Session()

# Evaluate the tensor c

print(sess.run(c))

When creating a session, you run a particular computation, node, or an operation. Every variable or computation that you perform is like an operation on a node within a graph. Initially, the graph will be the default one. The moment you create a TensorFlow object, there is a default graph that doesn't contain any operations or nodes. The moment you assign variables, constants, or placeholders, each of them is known as an operation (in TensorFlow terms).

This is in contrast to traditional concepts, where creating a constant or a variable is not an operation. As seen in the example above, only the command ‘c = a*b’ would be an operation. But in TensorFlow, assigning variables or constants are operations as well. During a session, you can actually run all of these operations or nodes.

In our example, the top three commands only create the graph; execution doesn't take place until you create a session (with the command sess = tf.Session()).

## TensorFlow Program Basics

Let us take a look at the various examples of programs in TensorFlow in this section of the ‘What is TensorFlow’ article.

Here is the typical “Hello World” program in TensorFlow:
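A minimal sketch of such a program, assuming TensorFlow 2.x is installed and using its tf.compat.v1 API to match the session style shown earlier:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use graph-and-session execution

hello = tf.constant("Hello, TensorFlow!")

with tf.compat.v1.Session() as sess:
    print(sess.run(hello).decode())  # Hello, TensorFlow!
```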

Next, you can see how to create variables, constants, or strings:
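A sketch along those lines, again assuming TensorFlow 2.x with the tf.compat.v1 API (the names are illustrative):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

c = tf.constant(10, name="a_constant")
v = tf.Variable(5, name="a_variable")
s = tf.constant("a string", name="a_string")

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())  # variables need initializing
    print(sess.run(c), sess.run(v))  # 10 5
    print(sess.run(s).decode())      # a string
```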

Let’s see how a placeholder is defined, and how to execute and populate the placeholder values:
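One possible version, mirroring the placeholder example from earlier (assuming TensorFlow 2.x with the tf.compat.v1 API):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, name="x")
y = x * 3

with tf.compat.v1.Session() as sess:
    # feed_dict populates the placeholder at run time
    print(sess.run(y, feed_dict={x: 2.0}))  # 6.0
```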

Below is the code to perform computations using TensorFlow:
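A small sketch of the basic arithmetic ops (assuming TensorFlow 2.x with the tf.compat.v1 API):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant(7.0)
b = tf.constant(2.0)

with tf.compat.v1.Session() as sess:
    print(sess.run(tf.add(a, b)))       # 9.0
    print(sess.run(tf.subtract(a, b)))  # 5.0
    print(sess.run(tf.multiply(a, b)))  # 14.0
    print(sess.run(tf.divide(a, b)))    # 3.5
```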

Next, you can see how matrix multiplication is done using TensorFlow:
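A sketch of matrix multiplication with tf.matmul (assuming TensorFlow 2.x with the tf.compat.v1 API; the matrices are made-up values):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

m1 = tf.constant([[1., 2.], [3., 4.]])
m2 = tf.constant([[5., 6.], [7., 8.]])
product = tf.matmul(m1, m2)  # standard matrix product, not element-wise

with tf.compat.v1.Session() as sess:
    print(sess.run(product))
    # [[19. 22.]
    #  [43. 50.]]
```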

Lastly, you can understand the TensorFlow graphs with the code shown below:
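A sketch that builds its own graph explicitly instead of relying on the default graph (assuming TensorFlow 2.x with the tf.compat.v1 API):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

graph = tf.Graph()  # an explicit graph instead of the default one
with graph.as_default():
    a = tf.constant(4.0, name="a")
    b = tf.constant(6.0, name="b")
    total = tf.add(a, b, name="total")

with tf.compat.v1.Session(graph=graph) as sess:
    print(sess.run(total))  # 10.0
```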

## Simple TensorFlow Example

Let us take the example of a basic arithmetic operation like an addition to create a graph. To do so, we need to call tf.add() and add two values, say a=2 and b=3, using TensorFlow. The code will be as follows:
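A sketch of that code, assuming TensorFlow 2.x with the tf.compat.v1 API so that printing c shows the Tensor information rather than its value:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant(2, name="a")
b = tf.constant(3, name="b")
c = tf.add(a, b, name="Add")

print(c)  # Tensor("Add:0", shape=(), dtype=int32) - name, shape, and type only

with tf.compat.v1.Session() as sess:
    print(sess.run(c))  # 5
```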


This example creates two input nodes (a=2 and b=3) and one output node for the addition operation. The variable c (the output Tensor of the operation) prints out the Tensor information - its name, shape, and type.

## Use Case Implementation Using TensorFlow

Let us finally discuss a use case that will sharpen your understanding of what is TensorFlow.

Problem Statement - To analyze various aspects of an individual and predict which income class they belong to (>50K or <=50K) using census data.

The aspects under consideration include attributes such as age, work class, education, occupation, and hours worked per week.

We have to build a model that classifies whether the income of a particular individual is more or less than 50K annually and determine its accuracy.
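As a sketch of what such a model could look like in modern TensorFlow with the Keras API, the snippet below trains a small binary classifier on synthetic stand-in features; the data, layer sizes, and epoch count are illustrative, not the actual census setup:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the census features (age, education-num, hours-per-week, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")  # toy ">50K" / "<=50K" label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of the >50K class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {acc:.2f}")
```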

### Learned What is TensorFlow? Take the Next Step

TensorFlow has made the implementation of machine learning and deep learning models far easier. While programming in TensorFlow is only a small part of the complicated world of deep learning, you should consider enhancing your knowledge by enrolling in our AI and Machine Learning courses. These courses will take you on a journey through deep learning concepts, implementing deep learning algorithms, building neural networks, and much more.