Skills you will learn

  • Fundamentals of Deep Learning and Neural Networks
  • Building and Training Artificial Neural Networks
  • Designing and Implementing Deep Neural Networks
  • Working with TensorFlow
  • Working with PyTorch for Model Development
  • Model Optimization
  • Practical Application of Deep Learning Frameworks

Who should learn

  • Software Developers
  • ML Engineers
  • Data Scientists
  • Students
  • Researchers

What you will learn

  • Deep Learning with TensorFlow and PyTorch

    • Lesson 01: Course Introduction

      05:03
      • 1.01 Course Introduction Foundations and Applications of Deep Learning with TensorFlow and PyTorch
        02:20
      • 1.02 Kickstarting Foundations and Applications of Deep Learning with TensorFlow and PyTorch
        02:43
    • Lesson 02: Learning Objectives

      01:09
      • 2.01 Learning Objectives
        01:09
    • Lesson 03: Introduction to Deep Learning

      34:07
      • 3.01 Brief History of AI
        05:45
      • 3.02 Motivation for Deep Learning
        02:20
      • 3.03 Deep Learning
        01:15
      • 3.04 Deep Learning vs Machine Learning​
        02:29
      • 3.05 Breakthroughs in Deep Learning Foundations and Early Milestones 2012 2017
        06:03
      • 3.06 Deep Learning Successes in the Last Decade
        01:57
      • 3.07 Key Reasons to Learn Deep Learning
        03:04
      • 3.08 Applications of Deep Learning
        01:32
      • 3.09 Limitations of Deep Learning
        02:06
      • 3.10 Deep Learning Frameworks
        02:35
      • 3.11 Lifecycle of a Deep Learning Project
        05:01
    • Lesson 04: Artificial Neural Networks

      01:24:27
      • 4.01 Neurons
        03:14
      • 4.02 Neural Networks
        01:14
      • 4.03 Types of Neural Networks
        03:43
      • 4.04 Neural Network Architecture
        03:03
      • 4.05 Perceptron
        06:32
      • 4.06 Demo Perceptron Based Classification Model
        09:16
      • 4.07 Activation Function
        06:07
      • 4.08 ReLU vs Sigmoid Function
        01:27
      • 4.09 Demo Configuring Neural Network and Activation Function
        08:39
      • 4.10 Forward Propagation in Perceptron
        01:20
      • 4.11 What Is Loss Function
        01:42
      • 4.12 What Is Cost Function
        02:32
      • 4.13 Backpropagation in Perceptron
        04:43
      • 4.14 A Feed Forward Network
        02:10
      • 4.15 Forward Pass
        02:51
      • 4.16 Calculating Total Error
        01:24
      • 4.17 Backward Pass
        05:36
      • 4.18 Updated Weight
        01:33
      • 4.19 Hidden Layer Weight Assignment
        03:47
      • 4.20 Vanishing Gradient
        01:15
      • 4.21 Exploding Gradient
        00:46
      • 4.22 Gradient Descent
        04:07
      • 4.23 Gradient Ascent
        04:23
      • 4.24 The Learning Rate
        01:09
      • 4.25 Limitations of a Perceptron
        01:54
    • Lesson 05: Deep Neural Networks

      01:04:39
      • 5.01 Introduction to Deep Neural Network (DNN)
        03:44
      • 5.02 Loss Function in DNN
        02:11
      • 5.03 Loss Function and Its Major Categories Regression Loss
        00:53
      • 5.04 Types of Regression Loss Mean Absolute Error
        02:10
      • 5.05 Types of Regression Loss Mean Squared Error
        01:50
      • 5.06 MSE vs MAE
        01:47
      • 5.07 Backpropagation with MSE Binary and Multiclass Classification
        01:22
      • 5.08 Types of Classification Loss
        04:36
      • 5.09 Types of Cross Entropy Loss
        06:03
      • 5.10 Types of Classification Loss Hinge Loss
        06:08
      • 5.11 Properties of Hinge Loss
        01:09
      • 5.12 Squared Hinge Loss
        01:13
      • 5.13 Forward Propagation in DNN
        02:17
      • 5.14 Demo Working on Forward Propagation
        06:44
      • 5.15 Backward Propagation in DNN
        02:50
      • 5.16 The Overfitting Problem
        02:13
      • 5.17 Regularization
        04:54
      • 5.18 Lesson End Project MNIST Image Classification
        12:35
    • Lesson 06: TensorFlow

      02:55:35
      • 6.01 Introduction to TensorFlow
        04:21
      • 6.02 Why Is TensorFlow Necessary
        02:56
      • 6.03 Categories of TensorFlow APIs
        04:26
      • 6.04 Applications of TensorFlow
        04:22
      • 6.05 Demo Introduction to Tensors Part 1
        08:49
      • 6.06 Demo Introduction to Tensors Part 2
        12:41
      • 6.07 Demo Hands on with TensorFlow Part 1
        11:17
      • 6.08 Demo Hands on with TensorFlow Part 2
        13:00
      • 6.09 Demo Training DNN Using TensorFlow
        11:47
      • 6.10 Installation of TensorFlow
        03:31
      • 6.11 TensorFlow Playground
        01:29
      • 6.12 Hands on with TensorFlow Playground
        04:41
      • 6.13 TFLearn
        05:58
      • 6.14 Built In Operations of TFLearn
        05:09
      • 6.15 Visualization
        02:51
      • 6.16 Introduction to Keras
        01:34
      • 6.17 Keras Supported Frameworks and Key Features
        01:10
      • 6.18 Keras Backends
        01:28
      • 6.19 Advantages of Keras
        02:41
      • 6.20 Keras API Components Layers and Models
        01:50
      • 6.21 Sequential and Functional API in Keras
        03:30
      • 6.22 Demo Sequential APIs in TensorFlow Part 1
        13:35
      • 6.23 Demo Sequential APIs in TensorFlow Part 2
        13:41
      • 6.24 Demo Functional APIs in TensorFlow Part 1
        09:18
      • 6.25 Demo Functional APIs in TensorFlow Part 2
        10:55
      • 6.26 Creating a Keras Model
        06:01
      • 6.27 Implementation of Loss Function
        00:37
      • 6.28 Demo Hands on with TensorFlow and Keras
        11:57
    • Lesson 07: PyTorch

      32:11
      • 7.01 Introduction to PyTorch
        02:01
      • 7.02 PyTorch vs Keras
        02:02
      • 7.03 Industrial Use Cases of PyTorch and Keras
        01:15
      • 7.04 PyTorch Key Characteristics and Emerging Trends
        01:51
      • 7.05 PyTorch Ecosystems
        01:27
      • 7.06 Installation of PyTorch
        01:19
      • 7.07 PyTorch Tensors
        01:44
      • 7.08 Modules in PyTorch
        03:58
      • 7.09 Building a DL Model with the Fashion MNIST Dataset
        00:58
      • 7.10 First Three Steps of Building a Deep Learning Model
        04:48
      • 7.11 Last Two Steps of Building a Deep Learning Model
        05:10
      • 7.12 Example MNIST Digit Classifier
        05:38
    • Lesson 08: Model Optimization and Performance Improvement

      03:06:57
      • 8.01 Optimization Essentials Concepts Examples and Algorithms
        03:13
      • 8.02 Importance of Optimization Algorithms
        01:57
      • 8.03 Overview of Optimizers Meaning Types and Examples
        03:14
      • 8.04 Introduction to Gradient Descent
        06:16
      • 8.05 Stochastic Gradient Descent
        03:06
      • 8.06 Mini Batch SGD and How It Differs from Gradient Descent
        02:17
      • 8.07 Demo Implementation of SGD Part 1
        12:15
      • 8.08 Demo Implementation of SGD Part 2
        07:00
      • 8.09 Introduction to Momentum
        03:37
      • 8.10 SGD and NAG with Momentum
        03:57
      • 8.11 Demo Implementation of Momentum Part 1
        09:49
      • 8.12 Demo Implementation of Momentum Part 2
        08:13
      • 8.13 Introduction to AdaGrad
        05:05
      • 8.14 Demo Implementation of AdaGrad
        13:06
      • 8.15 Introduction to RMSProp
        04:25
      • 8.16 Demo Implementation of RMSProp
        06:06
      • 8.17 Adadelta Selecting the Right Algorithm
        01:40
      • 8.18 Adadelta Equation
        02:52
      • 8.19 Adaptive Optimizers Adadelta and Adam Overview
        02:00
      • 8.20 Demo Implementation of Adadelta
        06:12
      • 8.21 Adam Optimizer
        05:00
      • 8.22 Demo Implementation of Adam
        06:32
      • 8.23 Data Preprocessing and Normalization
        02:09
      • 8.24 Batch Normalization Process​
        02:17
      • 8.25 Implementing and Applying Batch Normalization Using Keras
        01:33
      • 8.26 Regularization
        00:36
      • 8.27 Regularization in Machine Learning Purpose and Types
        01:24
      • 8.28 Improving Model Training Loss Sampling and Training Strategies
        04:14
      • 8.29 Dropout in Neural Networks Concepts and Usage
        02:15
      • 8.30 Dropout in Neural Networks Usage and Best Practices
        05:27
      • 8.31 Demo Implementation of Dropout
        09:11
      • 8.32 Vanishing Gradient
        02:56
      • 8.33 Exploding Gradient
        01:32
      • 8.34 Hyperparameter Tuning
        06:04
      • 8.35 Manual Hyperparameter Selection and Tuning
        01:51
      • 8.36 Data Partitioning for Hyperparameter Selection
        01:51
      • 8.37 Hyperparameter Tuning Techniques Grid Search
        02:21
      • 8.38 Hyperparameter Tuning Techniques Random Search
        02:16
      • 8.39 Advanced Hyperparameter Optimization Techniques
        01:32
      • 8.40 Interpretability What It Is and Why It Matters
        01:06
      • 8.41 Model Interpretability Concepts Methods and Evaluation
        05:30
      • 8.42 Explainability Meaning and Effectiveness
        02:10
      • 8.43 Interpretability vs Explainability
        01:01
      • 8.44 Lesson End Project Implementation Hyperparameter Tuning
        09:49
    • Lesson 09: Key Takeaways

      01:31
      • 9.01 Key Takeaways
        01:31
      • Knowledge Check

About the Course:

This course will help you learn deep learning with two popular tools: TensorFlow and PyTorch. You'll begin with the basics, learn about neural networks, and practice building and improving models. Whether you're just starting out or want to sharpen your skills, you'll get the knowledge and experience to create real AI applications.

Topics Covered:

  • Lesson 01: Course Introduction

    This lesson gives you an overview of the course.

Get a Completion Certificate

Share your certificate with prospective employers and your professional network on LinkedIn.

FAQs

  • Is this course free?

    Yes. Access to all lessons and the professional certificate is completely free.

  • Is this course suitable for beginners?

    Yes. It starts with deep learning fundamentals and is accessible to those with basic programming and math knowledge.

  • Are there prerequisites?

    Basic Python and general machine learning concepts are helpful, but the course builds your knowledge from scratch.

  • Does this course cover TensorFlow and PyTorch?

    Yes. The course includes dedicated lessons for both frameworks.

  • What is the difference between TensorFlow and PyTorch?

    TensorFlow is optimized for production and scalability; PyTorch is preferred for flexibility and experimentation.

  • Is model optimization covered in the course?

    Yes. Lesson 08 covers tuning, regularization, and performance improvement.

  • Will I learn about Neural Networks?

    Yes. Lesson 04 covers Artificial Neural Networks (ANNs), and Lesson 05 covers Deep Neural Networks (DNNs).

  • Will I learn about Backpropagation in detail?

    Yes. The technical modules break down the chain rule and show how error signals are propagated backward through the network to update weights during training.
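    As a rough illustration of the idea (a NumPy sketch, not the course's own notebooks), here is the chain rule driving a weight update for a single sigmoid neuron; the input, target, and learning rate are arbitrary example values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = 1.5, 1.0   # one training example: input and target (illustrative)
w, b = 0.2, 0.0   # initial weight and bias
lr = 0.5          # learning rate

for _ in range(100):
    z = w * x + b
    y_hat = sigmoid(z)
    # Squared-error loss L = 0.5 * (y_hat - y)^2; backprop multiplies
    # the local derivatives along the path from L back to w (chain rule):
    dL_dyhat = y_hat - y
    dyhat_dz = y_hat * (1 - y_hat)   # derivative of the sigmoid
    dL_dw = dL_dyhat * dyhat_dz * x  # dz/dw = x
    dL_db = dL_dyhat * dyhat_dz      # dz/db = 1
    # Gradient-descent update
    w -= lr * dL_dw
    b -= lr * dL_db

print(sigmoid(w * x + b))  # prediction moves toward the target 1.0
```

    The same pattern, repeated layer by layer, is what the course's backpropagation lessons walk through.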

  • Can I add the certificate to LinkedIn?

    Yes. The certificate can be added directly to your LinkedIn Licenses and Certifications section.

  • How long is the course?

    Approximately 10 hours. It is fully self-paced.

  • Is this course mobile-friendly?

    Yes. The course is accessible on both desktop and mobile devices.

  • Does the course discuss model deployment?

    The course primarily focuses on building and optimizing models. While it highlights TensorFlow's production capabilities, the core focus is on architecture and performance tuning.

  • What loss functions are explored?

    The course explains when to apply specific loss functions, such as Mean Squared Error (MSE) for regression and Categorical Cross-Entropy for multi-class classification.
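    As a hedged, framework-free illustration (plain NumPy, not the course's code), both losses come down to a few lines:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average squared difference (regression)."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy for one-hot targets (multi-class)."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Regression: targets vs. predictions
print(mse([3.0, 5.0], [2.5, 5.5]))  # 0.25

# Multi-class: one-hot target, softmax-style prediction
y_true = np.array([[0, 1, 0]])
y_pred = np.array([[0.1, 0.8, 0.1]])
print(categorical_cross_entropy(y_true, y_pred))  # -ln(0.8) ≈ 0.223
```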

  • Does the course cover Gradient Descent variants?

    Yes. You will explore the mechanics of Stochastic Gradient Descent (SGD), Adam, and RMSprop, focusing on how they manage convergence and avoid local minima.
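    To make the update rules concrete, here is a small NumPy sketch (illustrative only, using common textbook hyperparameters) comparing SGD, RMSprop, and Adam on a one-dimensional quadratic:

```python
import numpy as np

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3).
grad = lambda w: 2.0 * (w - 3.0)

# Plain SGD: w <- w - lr * g
w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)
sgd_w = w

# RMSprop: divide the step by a running average of squared gradients
w, s = 0.0, 0.0
for _ in range(200):
    g = grad(w)
    s = 0.9 * s + 0.1 * g * g
    w -= 0.1 * g / (np.sqrt(s) + 1e-8)
rms_w = w

# Adam: momentum (m) plus RMSprop-style scaling (v), with bias correction
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 201):
    g = grad(w)
    m = 0.9 * m + 0.1 * g
    v = 0.999 * v + 0.001 * g * g
    m_hat = m / (1 - 0.9 ** t)
    v_hat = v / (1 - 0.999 ** t)
    w -= 0.1 * m_hat / (np.sqrt(v_hat) + 1e-8)
adam_w = w

print(sgd_w, rms_w, adam_w)  # each should land near the minimum at 3.0
```

    The course's demos implement these same optimizers at realistic scale on real datasets.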

  • Is a certificate provided after completing the course?

    Yes. You receive a professional certificate upon completion to showcase your skills.
