Key features

MONEY-BACK GUARANTEE

How this works:

At Simplilearn, we greatly value the trust of our patrons. Our courses are designed to deliver an effective learning experience, and have helped over half a million learners find their professional calling. But if you feel your course is not to your liking, we offer a 7-day money-back guarantee: just send us a refund request within 7 days of purchase, and we will refund 100% of your payment, no questions asked!

For Self-Paced Learning:

Raise a refund request within 7 days of purchasing the course. The money-back guarantee is void if the participant has accessed more than 25% of the course content.

For Instructor-Led Training:

Raise a refund request within 7 days of commencement of the first batch you are eligible to attend. The money-back guarantee is void if the participant has accessed more than 25% of the content of an e-learning course or has attended Online Classrooms for more than 1 day.

  • 40 hours of instructor-led training
  • 24 hours of self-paced video
  • 5 real-life industry projects using Hadoop and Spark
  • Hands-on practice on CloudLab
  • Training on YARN, MapReduce, Pig, Hive, Impala, HBase, and Apache Spark
  • Aligned to Cloudera CCA175 certification exam

Course description

  • What are the course objectives?

    The Big Data Hadoop and Spark Developer course is designed to give you in-depth knowledge of the Big Data framework using Hadoop and Spark, including HDFS, YARN, and MapReduce. You will learn to use Pig, Hive, and Impala to process and analyze large datasets stored in HDFS, and to use Sqoop and Flume for data ingestion.

    You will master real-time data processing using Spark, including functional programming in Spark, implementing Spark applications, understanding parallel processing in Spark, and using Spark RDD optimization techniques. You will also learn the various iterative algorithms in Spark and use Spark SQL for creating, transforming, and querying data frames.

    As a part of the course, you will be required to execute real-life, industry-based projects using CloudLab in the domains of banking, telecommunication, social media, insurance, and e-commerce. This Big Data Hadoop training course will prepare you for the Cloudera CCA175 certification exam.

  • What skills will you learn?

    Big Data Hadoop training will enable you to master the concepts of the Hadoop framework and its deployment in a cluster environment. You will learn to:

    • Understand the different components of the Hadoop ecosystem, such as Hadoop 2.7, YARN, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
    • Understand Hadoop Distributed File System (HDFS) and YARN architecture, and learn how to work with them for storage and resource management
    • Understand MapReduce and its characteristics and assimilate advanced MapReduce concepts
    • Ingest data using Sqoop and Flume
    • Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
    • Understand different types of file formats, Avro schemas, using Avro with Hive and Sqoop, and schema evolution
    • Understand Flume, its architecture, sources, sinks, channels, and configurations
    • Understand and work with HBase, its architecture and data storage, and learn the difference between HBase and RDBMS
    • Gain a working knowledge of Pig and its components
    • Do functional programming in Spark, and implement and build Spark applications
    • Understand resilient distributed datasets (RDDs) in detail
    • Gain an in-depth understanding of parallel processing in Spark and Spark RDD optimization techniques
    • Understand the common use cases of Spark and various iterative algorithms
    • Learn Spark SQL, creating, transforming, and querying data frames
    • Prepare for Cloudera Big Data CCA175 certification
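
    The MapReduce model mentioned in the skills above can be sketched in plain Python. This is only a conceptual illustration of the map, shuffle, and reduce phases of a word count — not Hadoop's actual Java API:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs, like a Hadoop Mapper
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word, like a Hadoop Reducer
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big ideas", "data moves fast"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'moves': 1, 'fast': 1}
```

    In Hadoop, the same three phases run distributed across a cluster, with the shuffle handled by the framework between the map and reduce tasks.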

  • Who should take this course?

    Big Data career opportunities are on the rise, and Hadoop is quickly becoming a must-know technology in Big Data architecture. Big Data training is best suited for IT, data management, and analytics professionals looking to gain expertise in Big Data, including:

    • Software Developers and Architects
    • Analytics Professionals
    • Senior IT professionals
    • Testing and Mainframe Professionals
    • Data Management Professionals
    • Business Intelligence Professionals
    • Project Managers
    • Aspiring Data Scientists
    • Graduates looking to build a career in Big Data Analytics

    Prerequisites:
    • As knowledge of Java is necessary for this course, we provide complimentary access to the “Java Essentials for Hadoop” course
    • For Spark, we use Python and Scala. An e-book is provided for support.
    • Knowledge of an operating system such as Linux is useful for this course

  • What is CloudLab?

    CloudLab is a cloud-based Hadoop and Spark lab environment that Simplilearn offers with the course to ensure hassle-free execution of your hands-on projects. There is no need to install and maintain Hadoop or Spark on a virtual machine. Instead, you’ll be able to access a preconfigured environment on CloudLab via your browser. This environment is very similar to what companies use today to optimize the scalability and availability of their Hadoop installations.

    You’ll have access to CloudLab from the Simplilearn LMS (Learning Management System) for the duration of the course. You can learn more about CloudLab by viewing our CloudLab video.

  • What projects are included in this course?

    The course includes five real-life, industry-based projects on CloudLab. Successful evaluation of one of the following two projects is a part of the certification eligibility criteria.

    Project 1
    Domain: Banking
    Description: A Portuguese banking institution ran a marketing campaign to convince potential customers to invest in a bank term deposit. Their marketing campaigns were conducted through phone calls, and sometimes the same customer was contacted more than once. Your job is to analyze the data collected from the marketing campaign.

    Project 2
    Domain: Telecommunication
    Description: A mobile phone service provider has launched a new Open Network campaign, inviting users to raise complaints about the towers in their locality if they face issues with their mobile network. The company has collected a dataset of users who raised complaints. The fourth and fifth fields of the dataset contain the latitude and longitude of users, which is important information for the company. You must extract this latitude and longitude information from the available dataset and create three clusters of users with a k-means algorithm.
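
    The clustering step in this project uses k-means; in the course it would be done with Spark, but the algorithm itself can be sketched in plain Python. The coordinates below are made-up sample points, not the project dataset, and the naive seeding is only for a deterministic illustration:

```python
import math

def kmeans(points, k, iterations=20):
    """Plain-Python k-means sketch: assign each point to the nearest
    centroid, then move each centroid to the mean of its cluster."""
    # Naive deterministic seeding (real implementations use random or k-means++)
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster goes empty
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids, clusters

# Made-up (latitude, longitude) complaint locations around three towers
points = [(12.97, 77.59), (12.98, 77.60), (28.61, 77.21),
          (28.62, 77.23), (19.07, 72.87), (19.08, 72.88)]
centroids, clusters = kmeans(points, k=3)
print(sorted(len(c) for c in clusters))  # [2, 2, 2]
```

    Each centroid ends up near one group of towers; the project applies the same idea at scale on the collected complaint dataset.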

    For additional practice, we have three more projects to help you start your Hadoop and Spark journey.

    Project 3
    Domain: Social Media
    Description: As part of a recruiting exercise, a major social media company asked candidates to analyze a dataset from Stack Exchange. You will be using the dataset to arrive at certain key insights.

    Project 4
    Domain: Website providing movie-related information
    Description: IMDB is an online database of movie-related information. IMDB users rate movies on a scale of 1 to 5 -- 1 being the worst and 5 being the best -- and provide reviews. The dataset also includes additional information, such as each movie's release year. You are tasked with analyzing the collected data.
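
    An analysis like this boils down to grouping and aggregating ratings. A minimal plain-Python sketch, using made-up sample rows since the actual dataset is provided in CloudLab:

```python
from collections import defaultdict

# Made-up sample rows: (movie, rating on the 1-5 scale, release year)
ratings = [
    ("Movie A", 5, 1994), ("Movie A", 4, 1994),
    ("Movie B", 2, 2001), ("Movie B", 3, 2001), ("Movie B", 1, 2001),
]

# Group ratings by movie, then compute each movie's average rating
by_movie = defaultdict(list)
for movie, rating, _year in ratings:
    by_movie[movie].append(rating)

averages = {movie: sum(rs) / len(rs) for movie, rs in by_movie.items()}
print(averages)  # {'Movie A': 4.5, 'Movie B': 2.0}
```

    In the course, the same group-and-aggregate pattern is expressed with Hive queries or Spark transformations instead of Python dictionaries.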

    Project 5
    Domain: Insurance
    Description: A US-based insurance provider has decided to launch a new medical insurance program targeting various customers. To help customers understand the market better, you must perform a series of data analyses using Hadoop.

Course preview

    • Lesson 00 - Course Introduction 04:10
      • 0.1 Introduction 04:10
    • Lesson 01 - Introduction to Big Data and Hadoop Ecosystem 15:43
      • 1.1 Introduction 00:38
      • 1.2 Overview of Big Data and Hadoop 05:13
      • 1.3 Pop Quiz
      • 1.4 Hadoop Ecosystem 08:57
      • 1.5 Quiz
      • 1.6 Key Takeaways 00:55
    • Lesson 02 - HDFS and YARN 47:08
      • 2.1 Introduction 06:10
      • 2.2 HDFS Architecture and Components 08:59
      • 2.3 Pop Quiz
      • 2.4 Block Replication Architecture 09:53
      • 2.5 YARN Introduction 21:25
      • 2.6 Quiz
      • 2.7 Key Takeaways 00:41
      • 2.8 Hands-on Exercise
    • Lesson 03 - MapReduce and Sqoop 57:00
      • 3.1 Introduction 00:41
      • 3.2 Why MapReduce 11:57
      • 3.3 Small Data and Big Data 15:53
      • 3.4 Pop Quiz
      • 3.5 Data Types in Hadoop 04:23
      • 3.6 Joins in MapReduce 04:43
      • 3.7 What is Sqoop 18:21
      • 3.8 Quiz
      • 3.9 Key Takeaways 01:02
      • 3.10 Hands-on Exercise
    • Lesson 04 - Basics of Hive and Impala 19:00
      • 4.1 Introduction 04:07
      • 4.2 Pop Quiz
      • 4.3 Interacting with Hive and Impala 14:07
      • 4.4 Quiz
      • 4.5 Key Takeaways 00:46
    • Lesson 05 - Working with Hive and Impala 28:36
      • 5.1 Working with Hive and Impala 07:08
      • 5.2 Pop Quiz
      • 5.3 Data Types in Hive 07:47
      • 5.4 Validation of Data 07:47
      • 5.5 What is HCatalog and Its Uses 05:25
      • 5.6 Quiz
      • 5.7 Key Takeaways 00:29
      • 5.8 Hands-on Exercise
    • Lesson 06 - Types of Data Formats 14:35
      • 6.1 Introduction 00:44
      • 6.2 Types of File Format 02:35
      • 6.3 Pop Quiz
      • 6.4 Data Serialization 03:11
      • 6.5 Importing MySQL and Creating hivetb 04:32
      • 6.6 Parquet with Sqoop 02:37
      • 6.7 Quiz
      • 6.8 Key Takeaways 00:56
      • 6.9 Hands-on Exercise
    • Lesson 07 - Advanced Hive Concepts and Data File Partitioning 17:00
      • 7.1 Introduction 07:41
      • 7.2 Pop Quiz
      • 7.3 Overview of the Hive Query Language 08:18
      • 7.4 Quiz
      • 7.5 Key Takeaways 01:01
      • 7.6 Hands-on Exercise
    • Lesson 08 - Apache Flume and HBase 28:06
      • 8.1 Introduction 12:29
      • 8.2 Pop Quiz
      • 8.3 Introduction to HBase 14:40
      • 8.4 Quiz
      • 8.5 Key Takeaways 00:57
      • 8.6 Hands-on Exercise
    • Lesson 09 - Pig 18:08
      • 9.1 Introduction 10:45
      • 9.2 Pop Quiz
      • 9.3 Getting Datasets for Pig Development 06:45
      • 9.4 Quiz
      • 9.5 Key Takeaways 00:38
      • 9.6 Hands-on Exercise
    • Lesson 10 - Basics of Apache Spark 39:54
      • 10.1 Introduction 16:04
      • 10.2 Spark - Architecture, Execution, and Related Concepts 07:10
      • 10.3 Pop Quiz
      • 10.4 RDD Operations 10:39
      • 10.5 Functional Programming in Spark 05:34
      • 10.6 Quiz
      • 10.7 Key Takeaways 00:27
      • 10.8 Hands-on Exercise
    • Lesson 11 - RDDs in Spark 16:09
      • 11.1 Introduction 00:46
      • 11.2 RDD Data Types and RDD Creation 10:14
      • 11.3 Pop Quiz
      • 11.4 Operations in RDDs 04:35
      • 11.5 Quiz
      • 11.6 Key Takeaways 00:34
      • 11.7 Hands-on Exercise
    • Lesson 12 - Implementation of Spark Applications 13:54
      • 12.1 Introduction 03:57
      • 12.2 Running Spark on YARN 01:27
      • 12.3 Pop Quiz
      • 12.4 Running a Spark Application 01:47
      • 12.5 Dynamic Resource Allocation 01:06
      • 12.6 Configuring Your Spark Application 04:24
      • 12.7 Quiz
      • 12.8 Key Takeaways 01:13
    • Lesson 13 - Spark Parallel Processing 08:40
      • 13.1 Introduction 05:41
      • 13.2 Pop Quiz
      • 13.3 Parallel Operations on Partitions 02:28
      • 13.4 Quiz
      • 13.5 Key Takeaways 00:31
      • 13.6 Hands-on Exercise
    • Lesson 14 - Spark RDD Optimization Techniques 14:23
      • 14.1 Introduction 04:40
      • 14.2 Pop Quiz
      • 14.3 RDD Persistence 08:59
      • 14.4 Quiz
      • 14.5 Key Takeaways 00:44
      • 14.6 Hands-on Exercise
    • Lesson 15 - Spark Algorithms 27:09
      • 15.1 Introduction 00:49
      • 15.2 Spark: An Iterative Algorithm 03:13
      • 15.3 Introduction to Graph Parallel Systems 02:34
      • 15.4 Pop Quiz
      • 15.5 Introduction to Machine Learning 10:27
      • 15.6 Introduction to the Three C's 08:07
      • 15.7 Quiz
      • 15.8 Key Takeaways 01:59
    • What’s next? 05:28
      • The Next Step 05:28
    • Lesson 16 - Spark SQL 13:21
      • 16.1 Introduction 06:36
      • 16.2 Pop Quiz
      • 16.3 Interoperating with RDDs 06:08
      • 16.4 Quiz
      • 16.5 Key Takeaways 00:37
      • 16.6 Hands-on Exercise
    • Projects
      • Project for Submission
      • Projects with Solutions
    • Simulation Test Paper Instructions 00:20
      • Instructions 00:20
    • Course Feedback
      • Course Feedback
    • Lesson 00 - Course Introduction 01:35
      • 0.1 Course Introduction 00:11
      • 0.2 Course Objectives 00:20
      • 0.3 Course Overview 00:18
      • 0.4 Target Audience 00:17
      • 0.5 Prerequisites 00:14
      • 0.6 Lessons Covered 00:08
      • 0.7 Conclusion 00:07
    • Lesson 01 - Big Data Overview 18:21
      • 1.1 Lesson 1—Big Data Overview 00:08
      • 1.2 Objectives 00:21
      • 1.3 Big Data—Introduction 00:25
      • 1.4 The Three Vs of Big Data 00:14
      • 1.5 Data Volume 00:34
      • 1.6 Data Sizes 00:28
      • 1.7 Data Velocity 00:49
      • 1.8 Data Variety 00:38
      • 1.9 Data Evolution 00:54
      • 1.10 Features of Big Data 00:50
      • 1.11 Industry Examples 01:42
      • 1.12 Big Data Analysis 00:39
      • 1.13 Technology Comparison 01:05
      • 1.14 Stream 00:50
      • 1.15 Apache Hadoop 00:55
      • 1.16 Hadoop Distributed File System 00:58
      • 1.17 MapReduce 00:43
      • 1.18 Real-Time Big Data Tools 00:13
      • 1.19 Apache Kafka 00:19
      • 1.20 Apache Storm 00:26
      • 1.21 Apache Spark 00:56
      • 1.22 Apache Cassandra 00:55
      • 1.23 Apache HBase 00:22
      • 1.24 Real-Time Big Data Tools—Uses 00:26
      • 1.25 Real-Time Big Data—Use Cases 01:32
      • 1.26 Quiz
      • 1.27 Summary 00:53
      • 1.28 Conclusion 00:06
    • Lesson 02 - Introduction to ZooKeeper 24:27
      • 2.1 Introduction to ZooKeeper 00:10
      • 2.2 Objectives 00:26
      • 2.3 ZooKeeper—Introduction 00:30
      • 2.4 Distributed Applications 01:06
      • 2.5 Challenges of Distributed Applications 00:17
      • 2.6 Partial Failures 00:41
      • 2.7 Race Conditions 00:40
      • 2.8 Deadlocks 00:41
      • 2.9 Inconsistencies 00:48
      • 2.10 ZooKeeper Characteristics 00:53
      • 2.11 ZooKeeper Data Model 00:42
      • 2.12 Types of Znodes 00:38
      • 2.13 Sequential Znodes 00:32
      • 2.14 VMware 00:29
      • 2.15 Simplilearn Virtual Machine 00:23
      • 2.16 PuTTY 00:22
      • 2.17 WinSCP 00:19
      • 2.18 Demo—Install and Setup VM 00:06
      • 2.19 Demo—Install and Setup VM 08:12
      • 2.20 ZooKeeper Installation 00:20
      • 2.21 ZooKeeper Configuration 00:18
      • 2.22 ZooKeeper Command Line Interface 00:27
      • 2.23 ZooKeeper Command Line Interface Commands 01:07
      • 2.24 ZooKeeper Client APIs 00:30
      • 2.25 ZooKeeper Recipe 1: Handling Partial Failures 00:58
      • 2.26 ZooKeeper Recipe 2: Leader Election 02:09
      • 2.27 Quiz
      • 2.28 Summary 00:35
      • 2.29 Conclusion 00:08
    • Lesson 03 - Introduction to Kafka 16:01
      • 3.1 Lesson 3—Introduction to Kafka 00:09
      • 3.2 Objectives 00:19
      • 3.3 Apache Kafka—Introduction 00:23
      • 3.4 Kafka History 00:30
      • 3.5 Kafka Use Cases 00:48
      • 3.6 Aggregating User Activity Using Kafka—Example 00:43
      • 3.7 Kafka Data Model 01:27
      • 3.8 Topics 01:15
      • 3.9 Partitions 00:36
      • 3.10 Partition Distribution 00:48
      • 3.11 Producers 00:48
      • 3.12 Consumers 00:46
      • 3.13 Kafka Architecture 01:10
      • 3.14 Types of Messaging Systems 00:42
      • 3.15 Queue System—Example 00:37
      • 3.16 Publish-Subscribe System—Example 00:34
      • 3.17 Brokers 00:24
      • 3.18 Kafka Guarantees 00:58
      • 3.19 Kafka at LinkedIn 00:54
      • 3.20 Replication in Kafka 00:44
      • 3.21 Persistence in Kafka 00:41
      • 3.22 Quiz
      • 3.23 Summary 00:38
      • 3.24 Conclusion 00:07
    • Lesson 04 - Installation and Configuration 08:53
      • 4.1 Lesson 4—Installation and Configuration 00:10
      • 4.2 Objectives 00:22
      • 4.3 Kafka Versions 00:49
      • 4.4 OS Selection 00:19
      • 4.5 Machine Selection 00:34
      • 4.6 Preparing for Installation 00:19
      • 4.7 Demo 1—Kafka Installation and Configuration 00:05
      • 4.8 Demo 1—Kafka Installation and Configuration 00:05
      • 4.9 Demo 2—Creating and Sending Messages 00:05
      • 4.10 Demo 2—Creating and Sending Messages 00:05
      • 4.11 Stop the Kafka Server 00:40
      • 4.12 Setting up Multi-Node Kafka Cluster—Step 1 00:24
      • 4.13 Setting up Multi-Node Kafka Cluster—Step 2 00:59
      • 4.14 Setting up Multi-Node Kafka Cluster—Step 3 01:04
      • 4.15 Setting up Multi-Node Kafka Cluster—Step 4 00:36
      • 4.16 Setting up Multi-Node Kafka Cluster—Step 5 00:29
      • 4.17 Setting up Multi-Node Kafka Cluster—Step 6 01:08
      • 4.18 Quiz
      • 4.19 Summary 00:33
      • 4.20 Conclusion 00:07
    • Lesson 05 - Kafka Interfaces 18:17
      • 5.1 Lesson 5—Kafka Interfaces 00:09
      • 5.2 Objectives 00:18
      • 5.3 Kafka Interfaces—Introduction 00:21
      • 5.4 Creating a Topic 01:23
      • 5.5 Modifying a Topic 00:36
      • 5.6 kafka-topics.sh Options 00:57
      • 5.7 Creating a Message 00:15
      • 5.8 kafka-console-producer.sh Options 01:48
      • 5.9 Creating a Message—Example 1 01:01
      • 5.10 Creating a Message—Example 2 00:39
      • 5.11 Reading a Message 00:21
      • 5.12 kafka-console-consumer.sh Options 01:32
      • 5.13 Reading a Message—Example 00:44
      • 5.14 Java Interface to Kafka 00:18
      • 5.15 Producer Side API 00:42
      • 5.16 Producer Side API Example—Step 1 00:32
      • 5.17 Producer Side API Example—Step 2 00:15
      • 5.18 Producer Side API Example—Step 3 00:21
      • 5.19 Producer Side API Example—Step 4 00:21
      • 5.20 Producer Side API Example—Step 5 00:17
      • 5.21 Consumer Side API 00:37
      • 5.22 Consumer Side API Example—Step 1 00:21
      • 5.23 Consumer Side API Example—Step 2 00:15
      • 5.24 Consumer Side API Example—Step 3 00:20
      • 5.25 Consumer Side API Example—Step 4 00:25
      • 5.26 Consumer Side API Example—Step 5 00:25
      • 5.27 Compiling a Java Program 00:29
      • 5.28 Running the Java Program 00:18
      • 5.29 Java Interface Observations 00:39
      • 5.30 Exercise 1—Tasks 00:05
      • 5.31 Exercise 1—Tasks (contd.) 00:05
      • 5.32 Exercise 1—Solutions 00:05
      • 5.33 Exercise 1—Solutions (contd.) 00:05
      • 5.34 Exercise 1—Solutions (contd.) 00:05
      • 5.35 Exercise 2—Tasks 00:05
      • 5.36 Exercise 2—Tasks (contd.) 00:05
      • 5.37 Exercise 2—Solutions 00:05
      • 5.38 Exercise 2—Solutions (contd.) 00:05
      • 5.39 Exercise 2—Solutions (contd.) 00:05
      • 5.40 Exercise 2—Solutions (contd.) 00:05
      • 5.41 Exercise 2—Solutions (contd.) 00:05
      • 5.42 Quiz
      • 5.43 Summary 00:30
      • 5.44 Thank You 00:08
    • Lesson 01 - Essentials of Java for Hadoop 31:10
      • 1.1 Essentials of Java for Hadoop 00:19
      • 1.2 Lesson Objectives 00:24
      • 1.3 Java Definition 00:27
      • 1.4 Java Virtual Machine (JVM) 00:34
      • 1.5 Working of Java 01:01
      • 1.6 Running a Basic Java Program 00:56
      • 1.7 Running a Basic Java Program (contd.) 01:15
      • 1.8 Running a Basic Java Program in NetBeans IDE 00:11
      • 1.9 BASIC JAVA SYNTAX 00:12
      • 1.10 Data Types in Java 00:26
      • 1.11 Variables in Java 01:31
      • 1.12 Naming Conventions of Variables 01:21
      • 1.13 Type Casting 01:05
      • 1.14 Operators 00:30
      • 1.15 Mathematical Operators 00:28
      • 1.16 Unary Operators 00:15
      • 1.17 Relational Operators 00:19
      • 1.18 Logical or Conditional Operators 00:19
      • 1.19 Bitwise Operators 01:21
      • 1.20 Static Versus Non-Static Variables 00:54
      • 1.21 Static Versus Non-Static Variables (contd.) 00:17
      • 1.22 Statements and Blocks of Code 01:21
      • 1.23 Flow Control 00:47
      • 1.24 If Statement 00:40
      • 1.25 Variants of If Statement 01:07
      • 1.26 Nested If Statement 00:40
      • 1.27 Switch Statement 00:36
      • 1.28 Switch Statement (contd.) 00:34
      • 1.29 Loop Statements 01:19
      • 1.30 Loop Statements (contd.) 00:49
      • 1.31 Break and Continue Statements 00:44
      • 1.32 Basic Java Constructs 01:09
      • 1.33 Arrays 01:16
      • 1.34 Arrays (contd.) 01:07
      • 1.35 JAVA CLASSES AND METHODS 00:09
      • 1.36 Classes 00:46
      • 1.37 Objects 01:21
      • 1.38 Methods 01:01
      • 1.39 Access Modifiers 00:49
      • 1.40 Summary 00:41
      • 1.41 Thank You 00:09
    • Lesson 02 - Java Constructors 21:31
      • 2.1 Java Constructors 00:22
      • 2.2 Objectives 00:42
      • 2.3 Features of Java 01:08
      • 2.4 Classes, Objects, and Constructors 01:19
      • 2.5 Constructors 00:34
      • 2.6 Constructor Overloading 01:08
      • 2.7 Constructor Overloading (contd.) 00:28
      • 2.8 PACKAGES 00:09
      • 2.9 Definition of Packages 01:12
      • 2.10 Advantages of Packages 00:29
      • 2.11 Naming Conventions of Packages 00:28
      • 2.12 INHERITANCE 00:09
      • 2.13 Definition of Inheritance 01:07
      • 2.14 Multilevel Inheritance 01:15
      • 2.15 Hierarchical Inheritance 00:23
      • 2.16 Method Overriding 00:55
      • 2.17 Method Overriding (contd.) 00:35
      • 2.18 Method Overriding (contd.) 00:15
      • 2.19 ABSTRACT CLASSES 00:10
      • 2.20 Definition of Abstract Classes 00:41
      • 2.21 Usage of Abstract Classes 00:36
      • 2.22 INTERFACES 00:08
      • 2.23 Features of Interfaces 01:03
      • 2.24 Syntax for Creating Interfaces 00:24
      • 2.25 Implementing an Interface 00:23
      • 2.26 Implementing an Interface (contd.) 00:13
      • 2.27 INPUT AND OUTPUT 00:14
      • 2.28 Features of Input and Output 00:49
      • 2.29 System.in.read() Method 00:20
      • 2.30 Reading Input from the Console 00:31
      • 2.31 Stream Objects 00:21
      • 2.32 String Tokenizer Class 00:43
      • 2.33 Scanner Class 00:32
      • 2.34 Writing Output to the Console 00:28
      • 2.35 Summary 01:03
      • 2.36 Thank You 00:14
    • Lesson 03 - Essential Classes and Exceptions in Java 28:37
      • 3.1 Essential Classes and Exceptions in Java 00:18
      • 3.2 Objectives 00:31
      • 3.3 The Enums in Java 01:00
      • 3.4 Program Using Enum 00:44
      • 3.5 ArrayList 00:41
      • 3.6 ArrayList Constructors 00:38
      • 3.7 Methods of ArrayList 01:02
      • 3.8 ArrayList Insertion 00:47
      • 3.9 ArrayList Insertion (contd.) 00:38
      • 3.10 Iterator 00:39
      • 3.11 Iterator (contd.) 00:33
      • 3.12 ListIterator 00:46
      • 3.13 ListIterator (contd.) 01:00
      • 3.14 Displaying Items Using ListIterator 00:32
      • 3.15 For-Each Loop 00:35
      • 3.16 For-Each Loop (contd.) 00:23
      • 3.17 Enumeration 00:30
      • 3.18 Enumeration (contd.) 00:25
      • 3.19 HASHMAPS 00:15
      • 3.20 Features of Hashmaps 00:56
      • 3.21 Hashmap Constructors 01:36
      • 3.22 Hashmap Methods 00:58
      • 3.23 Hashmap Insertion 00:44
      • 3.24 HASHTABLE CLASS 00:21
      • 3.25 Hashtable Class and Constructors 01:25
      • 3.26 Hashtable Methods 00:41
      • 3.27 Hashtable Methods 00:48
      • 3.28 Hashtable Insertion and Display 00:29
      • 3.29 Hashtable Insertion and Display (contd.) 00:22
      • 3.30 EXCEPTIONS 00:22
      • 3.31 Exception Handling 01:06
      • 3.32 Exception Classes 00:26
      • 3.33 User-Defined Exceptions 01:04
      • 3.34 Types of Exceptions 00:44
      • 3.35 Exception Handling Mechanisms 00:54
      • 3.36 Try-Catch Block 00:15
      • 3.37 Multiple Catch Blocks 00:40
      • 3.38 Throw Statement 00:33
      • 3.39 Throw Statement (contd.) 00:25
      • 3.40 User-Defined Exceptions 00:11
      • 3.41 Advantages of Using Exceptions 00:25
      • 3.42 Error Handling and Finally Block 00:30
      • 3.43 Summary 00:41
      • 3.44 Thank You 00:04

Exam & certification

  • What do I need to do to unlock my Simplilearn certificate?

    Online Classroom:
    • Attend one complete batch.
    • Complete one project and one simulation test with a minimum score of 80%.
    Online Self-Learning:
    • Complete 85% of the course.
    • Complete one project and one simulation test with a minimum score of 80%.

Reviews

The course was amazing. The trainer had a very good knowledge about Hadoop and Spark. He answered our questions patiently and his demos were also very helpful. These online classes made it easier to understand the concepts of big data. Thank you Simplilearn!

Interactive training, good pace. Technical concepts were made easier for me to understand (I don't have much technical background). Trainer is very sincere in helping us learn and grasp the lessons, appreciate it a lot.

When I first thought about learning Big Data and Hadoop, I was not sure where to start because there is so much information over the internet which confuses you and takes you in various directions. Simplilearn helped me to stay on track, and guided me in the right direction. The trainer, especially, is really good. Thanks for this wonderful training. It’s true ROI!

The study material is appropriately well organized, easy to follow...I am really impressed with the online teaching methodology followed during the training.

Very good introduction to Big data Hadoop. Clearly organized and even a non-technical person can go through the course in a very organized manner.

Simplilearn is an excellent online platform for online trainings with flexible hours of training and well-planned course content with great depth and case studies. The most interesting part which differentiates Simplilearn from other online vendors is the quality of the customer service - 24 / 7. I would strongly recommend Simplilearn to all who are looking for a change in career. Enroll for the courses and get the experience to become a professional.

I really like the content of the course and the way trainer relates it with real-life examples.

Dedication of the trainer towards answering each & every question of the trainees makes us feel great and the online session as real as a classroom session.

The trainer was knowledgeable and patient in explaining things. Many things were significantly easier to grasp with a live interactive instructor. I also like that he went out of his way to send additional information and solutions after the class via email.

Very knowledgeable trainer, appreciate the time slot as well… Loved everything so far. I am very excited…

Great approach for the core understanding of Hadoop. Concepts are repeated from different points of view, responding to audience. At the end of the class you understand it.

The course is very informative and interactive and that is the best part of this training.

Very informative and active sessions. Trainer is easy going and very interactive.

The content is well designed and the instructor was excellent.

The trainer really went the extra mile to help me work along. Thanks

Course advisor

Sina Jamshidi, Big Data Lead at Bell Labs

Sina has over 10 years of experience in the technology field as a Big Data Architect at Bell Labs and a Platinum-level trainer. He is very passionate about building a Big Data education ecosystem and has been a contributor to a number of public and journal publications.

FAQs

  • What are the System Requirements?

    The tools you’ll need to attend training are:
    • Windows: Windows XP SP3 or higher
    • Mac: OSX 10.6 or higher
    • Internet speed: Preferably 512 Kbps or higher
    • Headset, speakers and microphone: You’ll need headphones or speakers to hear instruction clearly, as well as a microphone to talk to others. You can use a headset with a built-in microphone, or separate speakers and microphone.

  • Who are the trainers?

    Training is delivered by highly qualified and certified instructors with relevant industry experience.

  • What are the modes of training offered for this course?

    We offer this training in the following modes:

    • Live Virtual Classroom or Online Classroom: Attend the course remotely from your desktop via video conferencing to increase productivity and reduce the time spent away from work or home.

    • Online Self-Learning: In this mode, you will access the video training and go through the course at your own convenience.

  • Can I cancel my enrollment? Do I get a refund?

    Yes, you can cancel your enrollment if necessary. We will refund the course price after deducting an administration fee. To learn more, you can view our Refund Policy.

  • Are there any group discounts for classroom training programs?

    Yes, we have group discount options for our training programs. Contact us using the form on the right of any page on the Simplilearn website, or select the Live Chat link. Our customer service representatives can provide more details.

  • What payment options are available?

    Payments can be made using any of the following options. You will be emailed a receipt after the payment is made.
    • Visa Credit or Debit Card
    • MasterCard
    • American Express
    • Diners Club
    • PayPal

  • I’d like to learn more about this training program. Whom should I contact?

    Contact us using the form on the right of any page on the Simplilearn website, or select the Live Chat link. Our customer service representatives will be able to give you more details.

  • Who are our faculty and how are they selected?

    All of our highly qualified trainers are industry experts with at least 10-12 years of relevant teaching experience in Big Data Hadoop. Each of them has gone through a rigorous selection process which includes profile screening, technical evaluation, and a training demo before they are certified to train for us. We also ensure that only those trainers with a high alumni rating continue to train for us.

  • What is Global Teaching Assistance?

    Our teaching assistants are a dedicated team of subject matter experts here to help you get certified in your first attempt. They engage students proactively to ensure the course path is being followed and help you enrich your learning experience, from class onboarding to project mentoring and job assistance. Teaching Assistance is available during business hours for this Big Data Hadoop training course.

  • What is covered under the 24/7 Support promise?

    We offer 24/7 support through email, chat, and calls. We also have a dedicated team that provides on-demand assistance through our community forum. What’s more, you will have lifetime access to the community forum, even after completing your course with us, to discuss Big Data and Hadoop topics.

  • Disclaimer
  • PMP, PMI, PMBOK, CAPM, PgMP, PfMP, ACP, PBA, RMP, SP, and OPM3 are registered marks of the Project Management Institute, Inc.