Certified Big-Data and Hadoop Developer Training in San Antonio, Texas (TX)

  • 4.1
  • 3,138 Learners
  • Classroom Training

About the course

What is the course all about?
Simplilearn’s Big Data & Hadoop training is an ideal package for any aspiring professional who wants to build a career in Big Data analytics using the Hadoop framework. The course equips participants to work in the Hadoop environment with ease, covering vital components such as Flume and the Apache Oozie workflow scheduler, as well as advanced Hadoop 2.0 concepts: HBase, ZooKeeper, and Sqoop.

Why is the certification most sought-after?

As the Big Data buzz grows louder around Volume, Variety, and Velocity, certified Hadoop professionals equipped with the right skills to process Big Data through Hadoop are among the most sought-after hires in Fortune 500 companies worldwide. This gives certified professionals far greater career scope than their non-certified peers. Below are well-known facts on why one should opt for the Big Data & Hadoop certification:
  • According to Gartner, “Big Data & Analytics is one of the top 10 strategic technologies for businesses, and there will be 4.4 million Big Data jobs by 2015.”
  • Top companies such as Microsoft, Software AG, IBM, Oracle, HP, SAP, EMC, and Dell have invested over $15 billion in data management and analytics.
  • According to IDC, “The Big Data market will grow to $16.1 billion.”
  • According to Indeed.com, “Certified Big Data analysts start at $117,000, ahead of their non-certified peers.”
  • According to Robert Half Technology, “Big Data, Big Pay: average salaries can reach $154,250.”
What learning benefits do you get from Simplilearn’s training?
At the end of Simplilearn’s Big Data & Hadoop training, participants will be able to:
  • Master the concepts of the Hadoop framework and its deployment in a cluster environment
  • Write complex MapReduce programs in both MRv1 and MRv2 (YARN); see the word-count sketch after this list
  • Use the high-level scripting frameworks Pig and Hive to perform data analytics
  • Gain a good understanding of the Hadoop ecosystem and advanced components such as Flume and the Apache Oozie workflow scheduler
  • Understand advanced Hadoop 2.0 concepts: HBase, ZooKeeper, and Sqoop
  • Get hands-on experience configuring, optimizing, and troubleshooting a Hadoop cluster
  • Understand Hadoop architecture through the operating principles of the Hadoop Distributed File System (HDFS 1.0 and HDFS 2.0)
  • Understand advanced concepts of parallel processing in MapReduce 1.0 (MRv1) and MapReduce 2.0 (MRv2)
  • Process big data sets (around 3.5 billion data points across 5 projects) efficiently and derive logical conclusions applicable to live industry scenarios
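To give a flavor of the MapReduce programming covered, here is a minimal word-count sketch written against Hadoop's MRv2 (YARN-era) Java API. It is an illustration only, not course material; the class name and the input/output paths passed on the command line are placeholders.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emit (word, 1) for every token in the input split
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts emitted for each word
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner reduces shuffle volume
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a JAR, a job like this would typically be launched with hadoop jar wordcount.jar WordCount <input-dir> <output-dir>, with the output directory not yet existing on HDFS.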
What are the projects covered and their benefits?
A distinctive feature of Simplilearn’s Big Data & Hadoop training is the opportunity for participants to work on 5 live, industry-based projects spanning 11 unique data sets and covering around 3.5 billion data points. This adds immense domain knowledge and real-life industry experience to a participant’s curriculum vitae.
 
Project 1: Analyze a series of data sets for a US-based customer to arrive at a prudent product mix, product positioning, and marketing strategy.

Through this project, participants work as Hadoop developers, taking responsibility for completing all the subprojects within the defined timeframe.

Project Scenario:
Your company has recently bagged a large assignment from a US-based customer in the training and development business. The broader goal is to launch a suite of educational and skill-development programs for consumers across the globe. As part of the project, the customer wants your company to analyze a series of data sets to arrive at a prudent product mix, product positioning, and marketing strategy that will remain applicable for at least a decade.
 
The whole project is divided into 7 subprojects, each involving its own data set.
 
Subproject 1: Identify motivators for continuous adult education.
Subproject 2: Identify occupations poised for growth and decline over the next 10 years.
Subproject 3: Identify regions with growth potential across promising industries.
Subproject 4: Categorize financial capacity of consumers across regions and demographics.
Subproject 5: Identify major gender and geographic attributes for education.
Subproject 6: Analyze the education expenditure and related parameters across the globe.
Subproject 7: Analyze the strength of the financial sector in target markets and the participation of the population in the financial sector.

Project 2: Analyze and perform page ranking for a Twitter data set

Project Scenario:
As a Hadoop developer, your task is to perform page ranking (PageRank) on the Twitter data set provided.
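For a concrete sense of the work involved, below is a minimal sketch of one PageRank iteration expressed as a Hadoop MapReduce job in Java. The line format it assumes (node ID, current rank, and a comma-separated adjacency list, tab-separated) and every name in it are illustrative; the actual Twitter data set and its format are supplied with the course.

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// One PageRank iteration. Assumed input line format:
//   nodeId <TAB> currentRank <TAB> comma-separated outgoing links
public class PageRankIteration {

  public static class RankMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split("\t");
      String node = parts[0];
      double rank = Double.parseDouble(parts[1]);
      String[] links = parts.length > 2 ? parts[2].split(",") : new String[0];

      // Pass the adjacency list through so the graph structure survives the iteration.
      ctx.write(new Text(node), new Text("LINKS\t" + (parts.length > 2 ? parts[2] : "")));

      // Distribute this node's rank evenly across its outgoing links.
      for (String target : links) {
        if (!target.isEmpty()) {
          ctx.write(new Text(target), new Text(Double.toString(rank / links.length)));
        }
      }
    }
  }

  public static class RankReducer extends Reducer<Text, Text, Text, Text> {
    private static final double DAMPING = 0.85;

    @Override
    protected void reduce(Text node, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      double sum = 0.0;
      String links = "";
      for (Text v : values) {
        String s = v.toString();
        if (s.startsWith("LINKS\t")) {
          links = s.substring("LINKS\t".length()); // recovered adjacency list
        } else {
          sum += Double.parseDouble(s);            // incoming rank contribution
        }
      }
      double newRank = (1.0 - DAMPING) + DAMPING * sum;
      // Emit in the same format as the input so the output can feed the next iteration.
      ctx.write(node, new Text(newRank + "\t" + links));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance();
    job.setJarByClass(PageRankIteration.class);
    job.setMapperClass(RankMapper.class);
    job.setReducerClass(RankReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

In practice, the job is run repeatedly, feeding each iteration's output back in as the next iteration's input, until the ranks converge; the handling of nodes with no outgoing links is deliberately simplified here.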

Project 3: Analyze a monthly retail report for the US market - Retail Industry

Project Scenario:
A US-based online retailer wants to launch a new product category and needs to understand potential growth areas as well as areas that have stagnated over time. It wants to use this information to ensure its product focus is aligned with opportunities that will grow over the next 5-7 years. The customer has provided a data set to be used for the analysis.

Project 4: Analyze a mobile connectivity report for the UK market - Telecom Industry

Project Scenario:
A UK-based customer wants to launch 3G devices in regions where their penetration is low, and you have been allocated the task of performing this analysis using Hadoop. The customer has provided a data set to be used for the analysis.

Project 5: Analyze health reports across years for the US market - Insurance Industry

Project Scenario:
A US-based insurance provider has decided to launch a new medical insurance program targeting various customers. To help this customer better understand current market realities, you have to perform a series of data analytics tasks using Hadoop. The customer has provided a data set to be used for the analysis.

What are the career benefits in store for you?
  • The certification lets you ride the Big Data wave, enhances your analytics skills, and helps you land job roles such as Data Scientist, Hadoop Developer, Hadoop Architect, and Hadoop Tester.
  • Top companies such as Microsoft, Software AG, IBM, Oracle, HP, SAP, EMC, and Dell have invested over $15 billion in data management and analytics, increasing the number of opportunities for Big Data & Hadoop certified professionals.
  • Certified analysts earn around $117,000, ahead of their non-certified peers.
  • Certified Big Data professionals with hands-on exposure to industry-relevant tools have a growing career graph.
How do I become a Certified Big Data & Hadoop Developer?
Participants get certified in Big Data & Hadoop by:
  1. Completing any one of the five projects given by Simplilearn. The outcome of the project is verified by the lead trainer, and the candidate is evaluated thereafter. The necessary screenshots of the project outputs should be emailed to support@simplilearn.com.
  2. Clearing the online examination with a minimum score of 80%.
Note: A participant must fulfill both criteria, i.e., complete one project and clear the online exam with a minimum score of 80%, to become a Certified Big Data & Hadoop Developer.

Who should do this course?
Analytics professionals, IT professionals, ETL developers, project managers, and testing professionals (novices and experts alike) will find the course ideal. Any professional or fresh graduate interested in entering the Big Data analytics field can also pursue this certification.

What qualifications do you need?
Fundamental programming skills are the minimum requirement; working knowledge of Java would be an added advantage.
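As a rough calibration of "fundamental programming skills", a participant should be comfortable reading a snippet like the one below, which uses only constructs taught in the bundled Java Essentials for Hadoop module (classes, methods, arrays, loops, and conditionals). The class and data are invented for illustration.

public class Fundamentals {

  // Count how many words in the array are longer than a given length
  static int countLongWords(String[] words, int minLength) {
    int count = 0;
    for (String w : words) {
      if (w.length() > minLength) {
        count++;
      }
    }
    return count;
  }

  public static void main(String[] args) {
    String[] sample = {"hadoop", "pig", "hive", "mapreduce"};
    System.out.println(countLongWords(sample, 4)); // prints 2
  }
}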

Course Preview

  • Big Data Hadoop Developer
      Lesson 00 - Course Introduction
      Lesson 01 - Introduction to Big Data and Hadoop
      • 1.1 Introduction to Big Data and Hadoop    
      • 1.2 Objectives    
      • 1.3 Data Explosion     1:00
      • 1.4 Types of Data     1:00
      • 1.5 Need for Big Data     1:00
      • 1.6 Data: The Most Valuable Resource     1:00
      • 1.7 Big Data and Its Sources     1:00
      • 1.8 Three Characteristics of Big Data    
      • 1.9 Characteristics of Big Data Technology     2:00
      • 1.10 Appeal of Big Data Technology     1:00
      • 1.11 Leveraging Multiple Sources of Data    
      • 1.12 Traditional IT Analytics Approach     1:00
      • 1.13 Big Data Technology Platform for Discovery and Exploration     1:00
      • 1.14 Big Data Technology Capabilities    
      • 1.15 Big Data Use Cases     1:00
      • 1.16 Handling Limitations of Big Data     1:00
      • 1.17 Introduction to Hadoop     1:00
      • 1.18 History and Milestones of Hadoop     1:00
      • 1.19 Organizations Using Hadoop    
      • 1.20 Quiz    
      • 1.21 Summary     1:00
      • 1.22 Thank You    
      Lesson 02 - Getting Started with Hadoop
      • 2.1 Getting Started with Hadoop    
      • 2.2 Objectives    
      • 2.3 VMware Player Introduction     1:00
      • 2.4 VMware Player Hardware Requirements     1:00
      • 2.5 Steps to Install VMware Player     1:00
      • 2.6 Install VMware Player Step 1
      • 2.7 Install VMware Player Step 2
      • 2.8 Install VMware Player Step 3
      • 2.9 Install VMware Player Step 4
      • 2.10 Install VMware Player Step 5
      • 2.11 Install VMware Player Step 6
      • 2.12 Install VMware Player Step 7
      • 2.13 Install VMware Player Step 8
      • 2.14 Install VMware Player Step 9     1:00
      • 2.15 Steps to Create a VM in VMware Player     1:00
      • 2.16 Create a VM in VMware Player Step 1
      • 2.17 Create a VM in VMware Player Step 2
      • 2.18 Create a VM in VMware Player Step 3
      • 2.19 Create a VM in VMware Player Step 4
      • 2.20 Create a VM in VMware Player Step 5
      • 2.21 Create a VM in VMware Player Step 6
      • 2.22 Open a VM in VMware Player Step 1
      • 2.23 Open a VM in VMware Player Step 2
      • 2.24 Using Oracle VirtualBox to Open a VM     1:00
      • 2.25 Open a VM using Oracle VirtualBox Step 1
      • 2.26 Open a VM using Oracle VirtualBox Step 2
      • 2.27 Open a VM using Oracle VirtualBox Step 3
      • 2.28 Open a VM using Oracle VirtualBox Step 4
      • 2.29 Business Scenario     1:00
      • 2.30 Demo    
      • 2.31 Demo Summary
      • 2.32 Summary     1:00
      • 2.33 Thank You    
      Lesson 03 - Hadoop Architecture
      • 3.1 Hadoop Architecture    
      • 3.2 Objectives    
      • 3.3 Key Terms     1:00
      • 3.4 Hadoop Cluster Using Commodity Hardware     1:00
      • 3.5 Hadoop Configuration    
      • 3.6 Hadoop Core Services     1:00
      • 3.7 Apache Hadoop Core Components    
      • 3.8 Hadoop Core Components HDFS     1:00
      • 3.9 Hadoop Core Components MapReduce     1:00
      • 3.10 Regular File System vs. HDFS     1:00
      • 3.11 HDFS Characteristics     2:00
      • 3.12 HDFS Key Features     1:00
      • 3.13 HDFS Architecture     1:00
      • 3.14 HDFS Operation Principle     2:00
      • 3.15 HDFS     1:00
      • 3.16 File System Namespace     1:00
      • 3.17 NameNode Operation     2:00
      • 3.18 Data Block Split     1:00
      • 3.19 Benefits of Data Block Approach    
      • 3.20 HDFS Block Replication Architecture     1:00
      • 3.21 Replication Method     1:00
      • 3.22 Data Replication Topology    
      • 3.23 Data Replication Representation     1:00
      • 3.24 HDFS Access     1:00
      • 3.25 Business Scenario    
      • 3.26 Demo    
      • 3.27 Demo Summary
      • 3.28 Quiz    
      • 3.29 Summary     1:00
      • 3.30 Thank You    
      Lesson 04 - Hadoop Deployment
      • 4.1 Hadoop Deployment    
      • 4.2 Objectives    
      • 4.3 Ubuntu Server Introduction     1:00
      • 4.4 Installation of Ubuntu Server 12.04     1:00
      • 4.5 Business Scenario     1:00
      • 4.6 Demo1    
      • 4.7 Demo Summary     1:00
      • 4.8 Hadoop Installation Prerequisites    
      • 4.9 Hadoop Installation     2:00
      • 4.10 Hadoop Installation Step 1     1:00
      • 4.11 Hadoop Installation Step 2     1:00
      • 4.12 Hadoop Installation Step 3
      • 4.13 Hadoop Installation Step 4
      • 4.14 Hadoop Installation Step 5
      • 4.15 Hadoop Installation Step 6
      • 4.16 Hadoop Installation Step 7
      • 4.17 Hadoop Installation Step 7 (contd.)
      • 4.18 Hadoop Installation Step 8
      • 4.19 Hadoop Installation Step 8 (contd.)
      • 4.20 Hadoop Installation Step 8 (contd.)
      • 4.21 Hadoop Installation Step 9
      • 4.22 Hadoop Installation Step 9 (contd.)     1:00
      • 4.23 Hadoop Installation Step 10
      • 4.24 Hadoop Installation Step 10 (contd.)
      • 4.25 Hadoop Installation Step 11     1:00
      • 4.26 Hadoop Installation Step 12     1:00
      • 4.27 Hadoop Installation Step 12 (contd.)     1:00
      • 4.28 Demo2    
      • 4.29 Demo Summary     1:00
      • 4.30 Hadoop Multi-Node Installation Prerequisites
      • 4.31 Steps for Hadoop Multi-Node Installation     1:00
      • 4.32 Hadoop Multi-Node Installation Steps 1 and 2
      • 4.33 Hadoop Multi-Node Installation Step 3
      • 4.34 Hadoop Multi-Node Installation Step 3 (contd.)     1:00
      • 4.35 Hadoop Multi-Node Installation Step 4     1:00
      • 4.36 Hadoop Multi-Node Installation Step 4 (contd.)
      • 4.37 Hadoop Multi-Node Installation Step 4 (contd.)
      • 4.38 Single-Node Cluster vs. Multi-Node Cluster     1:00
      • 4.39 Demo3    
      • 4.40 Demo Summary    
      • 4.41 Demo4    
      • 4.42 Demo Summary     2:00
      • 4.43 Demo5    
      • 4.44 Demo Summary     2:00
      • 4.45 Quiz    
      • 4.46 Summary     1:00
      • 4.47 Thank You    
      Lesson 05 - Introduction to MapReduce
      • 5.1 Introduction to MapReduce    
      • 5.2 Objectives    
      • 5.3 MapReduce Introduction     1:00
      • 5.4 MapReduce Analogy     1:00
      • 5.5 MapReduce Analogy(contd.)     1:00
      • 5.6 MapReduce Example     2:00
      • 5.7 Map Execution    
      • 5.8 Map Execution Distributed Two Node Environment     1:00
      • 5.9 MapReduce Essentials     1:00
      • 5.10 MapReduce Jobs     1:00
      • 5.11 MapReduce Engine     1:00
      • 5.12 MapReduce and Associated Tasks     1:00
      • 5.13 MapReduce Association with HDFS     1:00
      • 5.14 Hadoop Job Work Interaction    
      • 5.15 Characteristics of MapReduce     1:00
      • 5.16 Real time Uses of MapReduce     1:00
      • 5.17 Prerequisites for Hadoop Installation in Ubuntu Desktop 12.04
      • 5.18 Steps to Install Hadoop     1:00
      • 5.19 Business Scenario     1:00
      • 5.20 Set up Environment for MapReduce Development    
      • 5.21 Small Data and Big Data     1:00
      • 5.22 Uploading Small Data and Big Data     1:00
      • 5.23 Demo1    
      • 5.24 Demo Summary    
      • 5.25 Build MapReduce Program     1:00
      • 5.26 Hadoop MapReduce Requirements     1:00
      • 5.27 Hadoop MapReduce Features     1:00
      • 5.28 Hadoop MapReduce Processes     1:00
      • 5.29 Steps of Hadoop MapReduce     1:00
      • 5.30 MapReduce Responsibilities     1:00
      • 5.31 MapReduce Java Programming in Eclipse    
      • 5.32 Create a New Project Step 1
      • 5.33 Create a New Project Step 2
      • 5.34 Create a New Project Step 3
      • 5.35 Create a New Project Step 4
      • 5.36 Create a New Project Step 5
      • 5.37 Demo2    
      • 5.38 Demo Summary    
      • 5.39 Demo3    
      • 5.40 Demo Summary    
      • 5.41 Checking Hadoop Environment for MapReduce     1:00
      • 5.42 Demo4    
      • 5.43 Demo Summary    
      • 5.44 Demo5    
      • 5.45 Demo Summary     1:00
      • 5.46 Demo6    
      • 5.47 Demo Summary     1:00
      • 5.48 MapReduce v2.0
      • 5.49 Quiz    
      • 5.50 Summary     1:00
      • 5.51 Thank You    
      Lesson 06 - Advanced HDFS and MapReduce
      • 6.1 Advanced HDFS and MapReduce    
      • 6.2 Objectives    
      • 6.3 Advanced HDFS Introduction     1:00
      • 6.4 HDFS Benchmarking     1:00
      • 6.5 HDFS Benchmarking (contd.)    
      • 6.6 Setting Up HDFS Block Size     1:00
      • 6.7 Setting Up HDFS Block Size Step 1    
      • 6.8 Setting Up HDFS Block Size Step 2     1:00
      • 6.9 Decommissioning a DataNode     1:00
      • 6.10 Decommissioning a DataNode Step 1    
      • 6.11 Decommissioning a DataNode Step 2    
      • 6.12 Decommissioning a DataNode Step 3 and 4    
      • 6.13 Business Scenario     1:00
      • 6.14 Demo1    
      • 6.15 Demo summary     1:00
      • 6.16 Advanced MapReduce     1:00
      • 6.17 Interfaces    
      • 6.18 Data Types in Hadoop     1:00
      • 6.19 InputFormats in MapReduce     1:00
      • 6.20 OutputFormats in MapReduce     2:00
      • 6.21 Distributed Cache     1:00
      • 6.22 Using Distributed Cache Step 1    
      • 6.23 Using Distributed Cache Step 2    
      • 6.24 Using Distributed Cache Step 3    
      • 6.25 Joins in MapReduce    
      • 6.26 Reduce Side Join     1:00
      • 6.27 Reduce Side Join(contd.)     1:00
      • 6.28 Replicated Join     1:00
      • 6.29 Replicated Join(contd.)     1:00
      • 6.30 Composite Join     1:00
      • 6.31 Composite Join(contd.)    
      • 6.32 Cartesian Product     1:00
      • 6.33 Cartesian Product(contd.)    
      • 6.34 Demo2    
      • 6.35 Demo summary     1:00
      • 6.36 Quiz    
      • 6.37 Summary     1:00
      • 6.38 Thank You    
      Lesson 07 - Pig
      • 7.1 Pig    
      • 7.2 Objectives    
      • 7.3 Challenges of MapReduce Development Using Java     1:00
      • 7.4 Introduction to Pig     1:00
      • 7.5 Components of Pig     1:00
      • 7.6 How Pig Works     1:00
      • 7.7 Data Model     1:00
      • 7.8 Data Model (contd.)     2:00
      • 7.9 Nested Data Model
      • 7.10 Pig Execution Modes
      • 7.11 Pig Interactive Modes     1:00
      • 7.12 Salient Features     1:00
      • 7.13 Pig vs. SQL     1:00
      • 7.14 Pig vs. SQL Example     1:00
      • 7.15 Installing Pig Engine
      • 7.16 Steps to Install Pig Engine     1:00
      • 7.17 Installing Pig Engine Step 1
      • 7.18 Installing Pig Engine Step 2     1:00
      • 7.19 Installing Pig Engine Step 3
      • 7.20 Installing Pig Engine Step 4
      • 7.21 Installing Pig Engine Step 5
      • 7.22 Run a Sample Program to Test Pig     1:00
      • 7.23 Getting Data Sets for Pig Development
      • 7.24 Prerequisites to Set the Environment for Pig Latin
      • 7.25 Prerequisites to Set the Environment for Pig Latin Step 1
      • 7.26 Prerequisites to Set the Environment for Pig Latin Step 2
      • 7.27 Prerequisites to Set the Environment for Pig Latin Step 3
      • 7.28 Loading and Storing Methods Step 1     1:00
      • 7.29 Loading and Storing Methods Step 2
      • 7.30 Script Interpretation     1:00
      • 7.31 Filtering and Transforming    
      • 7.32 Grouping and Sorting    
      • 7.33 Combining and Splitting    
      • 7.34 Pig Commands     1:00
      • 7.35 Business Scenario     1:00
      • 7.36 Demo1    
      • 7.37 Demo Summary    
      • 7.38 Demo2    
      • 7.39 Demo Summary    
      • 7.40 Demo3    
      • 7.41 Demo Summary    
      • 7.42 Demo4    
      • 7.43 Demo Summary    
      • 7.44 Demo5    
      • 7.45 Demo Summary     1:00
      • 7.46 Demo6    
      • 7.47 Demo Summary    
      • 7.48 Quiz    
      • 7.49 Summary     1:00
      • 7.50 Thank You    
      Lesson 08 - Hive
      • 8.1 Hive    
      • 8.2 Objectives    
      • 8.3 Need for Additional Data Warehousing System     1:00
      • 8.4 Hive Introduction     1:00
      • 8.5 Hive Characteristics     1:00
      • 8.6 System Architecture and Components of Hive    
      • 8.7 Metastore    
      • 8.8 Metastore Configuration    
      • 8.9 Driver    
      • 8.10 Query Compiler    
      • 8.11 Query Optimizer     1:00
      • 8.12 Execution Engine    
      • 8.13 Hive Server     1:00
      • 8.14 Client Components     1:00
      • 8.15 Basics of the Hive Query Language     1:00
      • 8.16 Data Model Tables     1:00
      • 8.17 Data Model External Tables     1:00
      • 8.18 Data Types in Hive    
      • 8.19 Data Model Partitions     1:00
      • 8.20 Serialization and Deserialization     1:00
      • 8.21 Hive File Formats     1:00
      • 8.22 Hive Query Language Select    
      • 8.23 Hive Query Language JOIN and INSERT    
      • 8.24 Hive Installation Step 1
      • 8.25 Hive Installation Step 2
      • 8.26 Hive Installation Step 3
      • 8.27 Hive Installation Step 4
      • 8.28 Running Hive    
      • 8.29 Programming in Hive    
      • 8.30 Programming in Hive(contd.)    
      • 8.31 Programming in Hive(contd.)     1:00
      • 8.32 Programming in Hive(contd.)    
      • 8.33 Programming in Hive(contd.)    
      • 8.34 Programming in Hive(contd.)    
      • 8.35 Programming in Hive(contd.)    
      • 8.36 Programming in Hive(contd.)    
      • 8.37 Programming in Hive(contd.)    
      • 8.38 Hive Query Language Extensibility    
      • 8.39 User Defined Function     1:00
      • 8.40 Built-In Functions    
      • 8.41 Other Functions in Hive     1:00
      • 8.42 MapReduce Scripts     1:00
      • 8.43 UDF/UDAF vs. MapReduce Scripts
      • 8.44 Business Scenario     1:00
      • 8.45 Demo1    
      • 8.46 Demo Summary    
      • 8.47 Demo2    
      • 8.48 Demo Summary    
      • 8.49 Demo3    
      • 8.50 Demo Summary    
      • 8.51 Demo4    
      • 8.52 Demo Summary    
      • 8.53 Quiz    
      • 8.54 Summary     1:00
      • 8.55 Thank You    
      Lesson 09 - HBase
      • 9.1 HBase
      • 9.2 Objectives    
      • 9.3 HBase Introduction     1:00
      • 9.4 Characteristics of HBase     1:00
      • 9.5 Companies Using HBase    
      • 9.6 HBase Architecture     1:00
      • 9.7 HBase Architecture (contd.)     1:00
      • 9.8 Storage Model of HBase     1:00
      • 9.9 Row Distribution of Data between RegionServers    
      • 9.10 Data Storage in HBase     1:00
      • 9.11 Data Model     1:00
      • 9.12 When to Use HBase     1:00
      • 9.13 HBase vs. RDBMS     1:00
      • 9.14 Installation of HBase     1:00
      • 9.15 Installation of HBase Step 1    
      • 9.16 Installation of HBase Steps 2 and 3    
      • 9.17 Installation of HBase Steps 4 and 5    
      • 9.18 Installation of HBase Steps 6 and 7    
      • 9.19 Installation of HBase Step 8    
      • 9.20 Configuration of HBase    
      • 9.21 Configuration of HBase Step 1    
      • 9.22 Configuration of HBase Step 2    
      • 9.23 Configuration of HBase Steps 3 and 4    
      • 9.24 Business Scenario    
      • 9.25 Demo    
      • 9.26 Demo Summary     1:00
      • 9.27 Connecting to HBase     1:00
      • 9.28 HBase Shell Commands    
      • 9.29 HBase Shell Commands (contd.)    
      • 9.30 Quiz    
      • 9.31 Summary     1:00
      • 9.32 Thank you    
      Lesson 10 - Commercial Distribution of Hadoop
      • 10.1 Commercial Distribution of Hadoop    
      • 10.2 Objectives    
      • 10.3 Cloudera Introduction     1:00
      • 10.4 Cloudera CDH     1:00
      • 10.5 Downloading the Cloudera QuickStart Virtual Machine    
      • 10.6 Starting the Cloudera VM     1:00
      • 10.7 Starting the Cloudera VM Steps 1 and 2    
      • 10.8 Starting the Cloudera VM Steps 3 and 4    
      • 10.9 Starting the Cloudera VM Step 5    
      • 10.10 Starting the Cloudera VM Step 6    
      • 10.11 Logging into Hue    
      • 10.12 Logging into Hue(contd.)    
      • 10.13 Logging into Hue(contd.)    
      • 10.14 Cloudera Manager     1:00
      • 10.15 Logging Into Cloudera Manager    
      • 10.16 Business Scenario     1:00
      • 10.17 Demo1    
      • 10.18 Demo summary    
      • 10.19 Demo2    
      • 10.20 Demo summary     1:00
      • 10.21 Hortonworks Data Platform     1:00
      • 10.22 MapR Data Platform     1:00
      • 10.23 Pivotal HD     1:00
      • 10.24 IBM InfoSphere BigInsights     1:00
      • 10.25 IBM InfoSphere BigInsights(contd.)    
      • 10.26 Quiz    
      • 10.27 Summary     1:00
      • 10.28 Thank You    
      Lesson 11 - ZooKeeper, Sqoop, and Flume
      • 11.1 ZooKeeper, Sqoop, and Flume
      • 11.2 Objectives     1:00
      • 11.3 Introduction to ZooKeeper    
      • 11.4 Features of ZooKeeper     1:00
      • 11.5 Challenges Faced in Distributed Applications     1:00
      • 11.6 Coordination     1:00
      • 11.7 Goals of ZooKeeper     1:00
      • 11.8 Uses of ZooKeeper    
      • 11.9 ZooKeeper Entities     1:00
      • 11.10 ZooKeeper Data Model     1:00
      • 11.11 ZooKeeper Services    
      • 11.12 ZooKeeper Services(contd.)     1:00
      • 11.13 Client API Functions     1:00
      • 11.14 Recipe 1 Cluster Management     1:00
      • 11.15 Recipe 2 Leader Election     1:00
      • 11.16 Recipe 3 Distributed Exclusive Lock     1:00
      • 11.17 Business Scenario     1:00
      • 11.18 Demo1    
      • 11.19 Demo summary    
      • 11.20 Why Sqoop     1:00
      • 11.21 Why Sqoop(contd.)     1:00
      • 11.22 Benefits of Sqoop     1:00
      • 11.23 Sqoop Processing     1:00
      • 11.24 Sqoop Under the Hood     1:00
      • 11.25 Importing Data Using Sqoop    
      • 11.26 Sqoop Import Process    
      • 11.27 Sqoop Import Process(contd.)     1:00
      • 11.28 Importing Data to Hive    
      • 11.29 Importing Data to HBase     1:00
      • 11.30 Importing Data to HBase(contd.)    
      • 11.31 Exporting Data from Hadoop Using Sqoop    
      • 11.32 Exporting Data from Hadoop Using Sqoop(contd.)     1:00
      • 11.33 Sqoop Connectors     1:00
      • 11.34 Sample Sqoop Commands     1:00
      • 11.35 Business Scenario     1:00
      • 11.36 Demo2    
      • 11.37 Demo summary     1:00
      • 11.38 Demo3    
      • 11.39 Demo summary     1:00
      • 11.40 Why Flume     1:00
      • 11.41 Apache Flume Introduction     1:00
      • 11.42 Flume Model     1:00
      • 11.43 Flume Goals     1:00
      • 11.44 Scalability in Flume     1:00
      • 11.45 Flume Sample Use Cases     1:00
      • 11.46 Business Scenario    
      • 11.47 Demo4    
      • 11.48 Demo summary     1:00
      • 11.49 Quiz    
      • 11.50 Summary     1:00
      • 11.51 Thank You    
      Lesson 12 - Ecosystem and its Components
      • 12.1 Ecosystem and Its Components    
      • 12.2 Objectives    
      • 12.3 Apache Hadoop Ecosystem    
      • 12.4 Apache Oozie     1:00
      • 12.5 Apache Oozie Workflow     1:00
      • 12.6 Apache Oozie Workflow (contd.)     1:00
      • 12.7 Introduction to Mahout    
      • 12.8 Why Mahout     1:00
      • 12.9 Features of Mahout     1:00
      • 12.10 Usage of Mahout    
      • 12.11 Usage of Mahout (contd.)     1:00
      • 12.12 Apache Cassandra     1:00
      • 12.13 Why Apache Cassandra     1:00
      • 12.14 Apache Spark     1:00
      • 12.15 Apache Spark Tools     1:00
      • 12.16 Key Concepts Related to Apache Spark     1:00
      • 12.17 Apache Spark Example    
      • 12.18 Hadoop Integration     1:00
      • 12.19 Quiz    
      • 12.20 Summary     1:00
      • 12.21 Thank You    
      Lesson 13 - Hadoop Administration, Troubleshooting, and Security
      • 13.1 Hadoop Administration, Troubleshooting, and Security
      • 13.2 Objectives    
      • 13.3 Typical Hadoop Core Cluster     1:00
      • 13.4 Load Balancer     1:00
      • 13.5 Commands Used in Hadoop Programming     1:00
      • 13.6 Different Configuration Files of Hadoop Cluster     1:00
      • 13.7 Properties of hadoop-default.xml     1:00
      • 13.8 Different Configurations for Hadoop Cluster     1:00
      • 13.9 Different Configurations for Hadoop Cluster (contd.)     2:00
      • 13.10 Port Numbers for Individual Hadoop Services     2:00
      • 13.11 Performance Monitoring     1:00
      • 13.12 Performance Tuning    
      • 13.13 Parameters of Performance Tuning     1:00
      • 13.14 Troubleshooting and Log Observation     1:00
      • 13.15 Apache Ambari     1:00
      • 13.16 Key Features of Apache Ambari     1:00
      • 13.17 Business Scenario     1:00
      • 13.18 Demo1    
      • 13.19 Demo Summary    
      • 13.20 Demo2    
      • 13.21 Demo Summary     1:00
      • 13.22 Hadoop Security Kerberos     1:00
      • 13.23 Kerberos Authentication Mechanism    
      • 13.24 Kerberos Configuration     1:00
      • 13.25 Data Confidentiality     1:00
      • 13.26 Quiz    
      • 13.27 Summary     1:00
      • 13.28 Thank You    
      Course Summary
      • Course Summary     6:00
  • Java Essentials for Hadoop
      Lesson 01 - Essentials of Java for Hadoop
      • 1.1 Essentials of Java for Hadoop
      • 1.2 Lesson Objectives
      • 1.3 Java Definition
      • 1.4 Java Virtual Machine (JVM)     1:00
      • 1.5 Working of Java     1:00
      • 1.6 Running a Basic Java Program     1:00
      • 1.7 Running a Basic Java Program (contd.)     1:00
      • 1.8 Running a Basic Java Program in NetBeans IDE    
      • 1.9 BASIC JAVA SYNTAX    
      • 1.10 Data Types in Java    
      • 1.11 Variables in Java     2:00
      • 1.12 Naming Conventions of Variables     1:00
      • 1.13 Type Casting     1:00
      • 1.14 Operators     1:00
      • 1.15 Mathematical Operators    
      • 1.16 Unary Operators
      • 1.17 Relational Operators    
      • 1.18 Logical or Conditional Operators    
      • 1.19 Bitwise Operators     1:00
      • 1.20 Static Versus Non-Static Variables     1:00
      • 1.21 Static Versus Non-Static Variables (contd.)
      • 1.22 Statements and Blocks of Code     1:00
      • 1.23 Flow Control     1:00
      • 1.24 If Statement     1:00
      • 1.25 Variants of if Statement     1:00
      • 1.26 Nested If Statement     1:00
      • 1.27 Switch Statement     1:00
      • 1.28 Switch Statement (contd.)     1:00
      • 1.29 Loop Statements     1:00
      • 1.30 Loop Statements (contd.)     1:00
      • 1.31 Break and Continue Statements     1:00
      • 1.32 Basic Java Constructs     1:00
      • 1.33 Arrays     1:00
      • 1.34 Arrays (contd.)     1:00
      • 1.35 JAVA CLASSES AND METHODS    
      • 1.36 Classes     1:00
      • 1.37 Objects     1:00
      • 1.38 Methods     1:00
      • 1.39 Access Modifiers     1:00
      • 1.40 Summary     1:00
      • 1.41 Thank You    
      Lesson 02 - Java Constructors
      • 2.1 Java Constructors    
      • 2.2 Objectives     1:00
      • 2.3 Features of Java     1:00
      • 2.4 Classes Objects and Constructors     1:00
      • 2.5 Constructors     1:00
      • 2.6 Constructor Overloading     1:00
      • 2.7 Constructor Overloading (contd.)    
      • 2.8 PACKAGES    
      • 2.9 Definition of Packages     1:00
      • 2.10 Advantages of Packages    
      • 2.11 Naming Conventions of Packages    
      • 2.12 INHERITANCE    
      • 2.13 Definition of Inheritance     1:00
      • 2.14 Multilevel Inheritance     1:00
      • 2.15 Hierarchical Inheritance    
      • 2.16 Method Overriding     1:00
      • 2.17 Method Overriding(contd.)     1:00
      • 2.18 Method Overriding(contd.)    
      • 2.19 ABSTRACT CLASSES    
      • 2.20 Definition of Abstract Classes     1:00
      • 2.21 Usage of Abstract Classes     1:00
      • 2.22 INTERFACES    
      • 2.23 Features of Interfaces     1:00
      • 2.24 Syntax for Creating Interfaces    
      • 2.25 Implementing an Interface    
      • 2.26 Implementing an Interface(contd.)    
      • 2.27 INPUT AND OUTPUT    
      • 2.28 Features of Input and Output     1:00
      • 2.29 System.in.read() Method    
      • 2.30 Reading Input from the Console     1:00
      • 2.31 Stream Objects    
      • 2.32 String Tokenizer Class     1:00
      • 2.33 Scanner Class     1:00
      • 2.34 Writing Output to the Console    
      • 2.35 Summary     1:00
      • 2.36 Thank You    
      Lesson 03 - Essential Classes and Exceptions in Java
      • 3.1 Essential Classes and Exceptions in Java    
      • 3.2 Objectives     1:00
      • 3.3 The Enums in Java     1:00
      • 3.4 Program Using Enum     1:00
      • 3.5 ArrayList     1:00
      • 3.6 ArrayList Constructors     1:00
      • 3.7 Methods of ArrayList     1:00
      • 3.8 ArrayList Insertion     1:00
      • 3.9 ArrayList Insertion (contd.)     1:00
      • 3.10 Iterator     1:00
      • 3.11 Iterator (contd.)     1:00
      • 3.12 ListIterator     1:00
      • 3.13 ListIterator (contd.)     1:00
      • 3.14 Displaying Items Using ListIterator     1:00
      • 3.15 For-Each Loop     1:00
      • 3.16 For-Each Loop (contd.)    
      • 3.17 Enumeration     1:00
      • 3.18 Enumeration (contd.)    
      • 3.19 HASHMAPS    
      • 3.20 Features of Hashmaps     1:00
      • 3.21 Hashmap Constructors     2:00
      • 3.22 Hashmap Methods     1:00
      • 3.23 Hashmap Insertion     1:00
      • 3.24 HASHTABLE CLASS    
      • 3.25 Hashtable Class and Constructors     1:00
      • 3.26 Hashtable Methods     1:00
      • 3.27 Hashtable Methods (contd.)     1:00
      • 3.28 Hashtable Insertion and Display    
      • 3.29 Hashtable Insertion and Display (contd.)    
      • 3.30 EXCEPTIONS    
      • 3.31 Exception Handling     1:00
      • 3.32 Exception Classes    
      • 3.33 User-Defined Exceptions     1:00
      • 3.34 Types of Exceptions     1:00
      • 3.35 Exception Handling Mechanisms     1:00
      • 3.36 Try-Catch Block    
      • 3.37 Multiple Catch Blocks     1:00
      • 3.38 Throw Statement     1:00
      • 3.39 Throw Statement (contd.)    
      • 3.40 User-Defined Exceptions    
      • 3.41 Advantages of Using Exceptions    
      • 3.42 Error Handling and finally block     1:00
      • 3.43 Summary     1:00
      • 3.44 Thank You    

Eligibility

Anyone who has knowledge of Java, basic UNIX, and basic SQL can opt for the Big Data and Hadoop training course.

Why Simplilearn?

Why choose Simplilearn for your training?
Simplilearn’s Big Data & Hadoop training is a first-of-its-kind comprehensive program, ideal for professionals and the best value for the time and money invested. We stand out because participants:
  • Have the flexibility to choose from 3 different modes of learning
  • Get hands-on lab exercises
  • Work on 5 real-life, industry-based projects covering 3.5 billion data points across 11 data sets
 Take a look at how Simplilearn’s training stands above other training providers:
 
Simplilearn | Other Training Providers
160 hrs of total learning (2x more than any other training provider) | 90 hrs or less
60 hrs of real-time, industry-based projects with 3.5 billion data points to work on (3x more than any other training provider) | 20 hrs or less
25 hrs of high-quality e-learning content | Not available
5 hrs of doubt-clarification classes | Not available
Flexibility to choose from 3 available training modes | Only a single training mode
Chance to work on 5 real-life, industry-based projects | 1 project or none
Industry-specific projects in the top 3 sectors: Retail, Telecom, and Insurance | Not available
11 unique data sets to work on | 7 or fewer
Free 90-day e-learning access worth $252 with every instructor-led training | Not available
Free Java Essentials for Hadoop | Not available

In addition, Simplilearn’s training is the best because:
  • We are an accredited, approved, and recognized training partner of globally renowned bodies such as the Project Management Institute (PMI) of USA, APMG, CFA Institute, GARP, ASTQB, and IIBA.
  • We have a network of over 2,000 certified and experienced trainers.
  • 40% of our participants come from Fortune 500 companies.
  • Our exam pass rate is 98.6%, and we guarantee that you pass the exam on your first attempt.
  • Our training packages come with attractive discounts and a money-back guarantee.


Simplilearn is conducting a 4-day Big-Data and Hadoop Developer certification training in San Antonio, delivered by certified and highly experienced trainers. We are one of the best Big-Data and Hadoop Developer training institutes in San Antonio, and the course includes interactive Big-Data and Hadoop Developer classes, and more.



About Big-Data and Hadoop Developer Certification Training


Note: All Texas residents have to be sponsored by an employer. Please provide the employer details after enrolment.


Venue

Holiday Inn San Antonio Market Square
  318 W. Cesar E. Chavez Blvd. (formerly Durango Blvd),
San Antonio, Texas 78204

Note: Indicative only. The final confirmed venue will be communicated by email 2 days prior to the training date.

About San Antonio

San Antonio is a city in the state of Texas that draws visitors from all over the world every year. Its main tourist attractions include the Alamo, the River Walk, boating, the Botanical Garden, and its shopping centers. Economically, San Antonio depends on sectors such as financial services, government, healthcare, and tourism, with tourism the largest source of employment in the city. Several Fortune 500 and other major companies are based here, including Kinetic Concepts, Frost National Bank, Harte-Hanks, and Taco Cabana. Because the economy is diversified across manufacturing and service companies, professionals in San Antonio can sharpen their project-handling and customer-service skills, and courses such as PMP and Six Sigma can help them improve productivity and customer satisfaction in San Antonio, TX, USA.
