Course description

  • Why learn Apache Spark and Scala?

    The world is getting increasingly digital, which means big data is here to stay. The importance of big data development using Spark and Scala programming, along with data analytics, will continue to grow in India over the coming years. A career in big data development and analytics might be just the role you have been looking for in Bangalore. Professionals working in this field, such as Spark developers, Scala programmers, and Hadoop developers, can expect impressive salaries: the median salary for data scientists is $116,000, and even entry-level positions average $92,000. As more companies realize the need for specialists in big data development and analytics, the number of these jobs will continue to grow. Close to 80 percent of data scientists say there is currently a shortage of professionals working in the field.

     

  • What are the course objectives?

    Simplilearn’s Apache Spark and Scala certification training in Bangalore is designed to:
    • Advance your expertise in the Big Data Hadoop Ecosystem
    • Help you master essential Apache Spark and Scala skills, such as Spark Streaming, Spark SQL, machine learning programming, GraphX programming, and Spark shell scripting
    • Help you land a Hadoop developer job requiring Apache Spark expertise by giving you a real-life industry project coupled with 30 demos

  • What skills will you learn?

    By completing this Apache Spark and Scala course you will be able to:
    • Understand the limitations of MapReduce and the role of Spark in overcoming these limitations
    • Understand the fundamentals of the Scala programming language and its features
    • Explain and master the process of installing Spark as a standalone cluster
    • Develop expertise in using Resilient Distributed Datasets (RDD) for creating applications in Spark
    • Master Structured Query Language (SQL) using SparkSQL
    • Gain a thorough understanding of Spark streaming features
    • Master and describe the features of Spark ML programming and GraphX programming
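    The Scala fundamentals listed above (type inference, anonymous functions, collections, pattern matching) can be previewed with a small, self-contained sketch in plain Scala. The values here are made up for illustration; the collection methods shown (map, filter, reduce) deliberately mirror the Spark RDD API covered in later lessons:

```scala
object SkillsPreview {
  def main(args: Array[String]): Unit = {
    // Type inference: the compiler infers List[Int], no annotation needed
    val ratings = List(4, 5, 3, 5, 2)

    // Anonymous function passed to a higher-order method,
    // the same shape as a Spark RDD transformation
    val doubled = ratings.map(r => r * 2)

    // filter + reduce chain, analogous to rdd.filter(...).reduce(...)
    val goodTotal = ratings.filter(_ >= 4).reduce(_ + _)

    // Pattern matching with a guard, a core Scala feature from Lesson 02
    val verdict = goodTotal match {
      case n if n >= 10 => "mostly positive"
      case _            => "mixed"
    }

    println(doubled)    // List(8, 10, 6, 10, 4)
    println(goodTotal)  // 14
    println(verdict)    // mostly positive
  }
}
```

    Because Spark's RDD operations share these method names and semantics, comfort with the plain-Scala versions transfers almost directly to the distributed API.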

  • Who is eligible to take this Hadoop Spark & Scala Certification Training course?

    Apache Spark & Scala career opportunities are on the rise, and Hadoop is quickly becoming a must-know technology in Big Data architecture. Spark & Scala Certification training is best suited for IT, data management, and analytics professionals looking to gain expertise in Apache Spark & Scala Programming, including:

    • Professionals aspiring for a career in the field of real-time big data analytics
    • Analytics professionals
    • Research professionals
    • IT developers and testers
    • Data scientists
    • BI and reporting professionals
    • Students who wish to gain a thorough understanding of Apache Spark

  • What projects are included in this Spark training course?

    This Apache Spark and Scala training course includes one project. In the project scenario, a U.S.-based university has collected datasets representing reviews of movies from multiple reviewers. To gain in-depth insights from the research data collected, you must perform a series of tasks in Spark on the dataset provided.
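    To give a feel for the kind of task the project involves, here is a minimal sketch of the core aggregation (average rating per movie) using plain Scala collections. The record shape and sample data below are hypothetical, not the actual course dataset; in Spark itself the same pattern would be expressed with pair-RDD or DataFrame groupBy/aggregate operations:

```scala
object MovieReviewSketch {
  // Hypothetical record shape: reviewer, movie, rating
  case class Review(reviewer: String, movie: String, rating: Double)

  def main(args: Array[String]): Unit = {
    // Made-up sample records standing in for the course dataset
    val reviews = Seq(
      Review("r1", "Inception", 5.0),
      Review("r2", "Inception", 4.0),
      Review("r1", "Memento", 3.0)
    )

    // Average rating per movie. In Spark this would be roughly
    // rdd.map(r => (r.movie, r.rating)) followed by an aggregate,
    // or a DataFrame groupBy("movie").avg("rating")
    val avgByMovie: Map[String, Double] =
      reviews.groupBy(_.movie).map { case (movie, rs) =>
        movie -> rs.map(_.rating).sum / rs.size
      }

    avgByMovie.toSeq.sortBy(_._1).foreach { case (movie, avg) =>
      println(f"$movie%-10s $avg%.2f")
    }
  }
}
```

    The project itself runs this kind of analysis at scale on the provided dataset, but the grouping and averaging logic is the same.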

Course preview

    • Lesson 00 - Course Overview 04:12
      • 0.1 Introduction 00:13
      • 0.2 Course Objectives 00:28
      • 0.3 Course Overview 00:38
      • 0.4 Target Audience 00:31
      • 0.5 Course Prerequisites 00:21
      • 0.6 Value to the Professionals 00:48
      • 0.7 Value to the Professionals (contd.) 00:20
      • 0.8 Value to the Professionals (contd.) 00:21
      • 0.9 Lessons Covered 00:24
      • 0.10 Conclusion 00:08
    • Lesson 01 - Introduction to Spark 25:34
      • 1.1 Introduction 00:15
      • 1.2 Objectives 00:26
      • 1.3 Evolution of Distributed Systems
      • 1.4 Need of New Generation Distributed Systems 01:15
      • 1.5 Limitations of MapReduce in Hadoop 01:06
      • 1.6 Limitations of MapReduce in Hadoop (contd.) 01:07
      • 1.7 Batch vs. Real-Time Processing 01:09
      • 1.8 Application of Stream Processing 00:07
      • 1.9 Application of In-Memory Processing 01:48
      • 1.10 Introduction to Apache Spark 00:45
      • 1.11 Components of a Spark Project
      • 1.12 History of Spark 00:50
      • 1.13 Language Flexibility in Spark 00:55
      • 1.14 Spark Execution Architecture 01:13
      • 1.15 Automatic Parallelization of Complex Flows 00:59
      • 1.16 Automatic Parallelization of Complex Flows-Important Points 01:13
      • 1.17 APIs That Match User Goals 01:06
      • 1.18 Apache Spark-A Unified Platform of Big Data Apps 01:38
      • 1.19 More Benefits of Apache Spark 01:05
      • 1.20 Running Spark in Different Modes 00:41
      • 1.21 Installing Spark as a Standalone Cluster-Configurations
      • 1.22 Installing Spark as a Standalone Cluster-Configurations 00:08
      • 1.23 Demo-Install Apache Spark 00:08
      • 1.24 Demo-Install Apache Spark 02:41
      • 1.25 Overview of Spark on a Cluster 00:47
      • 1.26 Tasks of Spark on a Cluster 00:37
      • 1.27 Companies Using Spark-Use Cases 00:46
      • 1.28 Hadoop Ecosystem vs. Apache Spark 00:32
      • 1.29 Hadoop Ecosystem vs. Apache Spark (contd.) 00:43
      • 1.30 Quiz
      • 1.31 Summary 00:40
      • 1.32 Summary (contd.) 00:41
      • 1.33 Conclusion 00:13
    • Lesson 02 - Introduction to Programming in Scala 37:35
      • 2.1 Introduction 00:11
      • 2.2 Objectives 00:16
      • 2.3 Introduction to Scala 01:32
      • 2.4 Features of Scala
      • 2.5 Basic Data Types 00:24
      • 2.6 Basic Literals 00:35
      • 2.7 Basic Literals (contd.) 00:25
      • 2.8 Basic Literals (contd.) 00:21
      • 2.9 Introduction to Operators 00:31
      • 2.10 Types of Operators
      • 2.11 Use Basic Literals and the Arithmetic Operator 00:08
      • 2.12 Demo Use Basic Literals and the Arithmetic Operator 03:18
      • 2.13 Use the Logical Operator 00:07
      • 2.14 Demo Use the Logical Operator 01:40
      • 2.15 Introduction to Type Inference 00:34
      • 2.16 Type Inference for Recursive Methods 00:10
      • 2.17 Type Inference for Polymorphic Methods and Generic Classes 00:30
      • 2.18 Unreliability on Type Inference Mechanism 00:23
      • 2.19 Mutable Collection vs. Immutable Collection 01:13
      • 2.20 Functions 00:21
      • 2.21 Anonymous Functions 00:22
      • 2.22 Objects 01:08
      • 2.23 Classes 00:36
      • 2.24 Use Type Inference, Functions, Anonymous Function, and Class 00:09
      • 2.25 Demo Use Type Inference, Functions, Anonymous Function and Class 07:40
      • 2.26 Traits as Interfaces 00:57
      • 2.27 Traits-Example 00:09
      • 2.28 Collections 00:42
      • 2.29 Types of Collections 00:25
      • 2.30 Types of Collections (contd.) 00:26
      • 2.31 Lists 00:28
      • 2.32 Perform Operations on Lists 00:07
      • 2.33 Demo Use Data Structures 04:10
      • 2.34 Maps 00:46
      • 2.35 Maps-Operations
      • 2.36 Pattern Matching 00:33
      • 2.37 Implicits 00:37
      • 2.38 Implicits (contd.) 00:18
      • 2.39 Streams 00:22
      • 2.40 Use Data Structures 00:07
      • 2.41 Demo Perform Operations on Lists 03:25
      • 2.42 Quiz
      • 2.43 Summary 00:37
      • 2.44 Summary (contd.) 00:37
      • 2.45 Conclusion 00:15
    • Lesson 03 - Using RDD for Creating Applications in Spark 51:02
      • 3.1 Introduction 00:12
      • 3.2 Objectives 00:23
      • 3.3 RDDs API 01:40
      • 3.4 Features of RDDs
      • 3.5 Creating RDDs 00:36
      • 3.6 Creating RDDs—Referencing an External Dataset 00:19
      • 3.7 Referencing an External Dataset—Text Files 00:51
      • 3.8 Referencing an External Dataset—Text Files (contd.) 00:50
      • 3.9 Referencing an External Dataset—Sequence Files 00:33
      • 3.10 Referencing an External Dataset—Other Hadoop Input Formats 00:46
      • 3.11 Creating RDDs—Important Points 01:09
      • 3.12 RDD Operations 00:38
      • 3.13 RDD Operations—Transformations 00:47
      • 3.14 Features of RDD Persistence 00:57
      • 3.15 Storage Levels Of RDD Persistence 00:20
      • 3.16 Choosing The Correct RDD Persistence Storage Level
      • 3.17 Invoking the Spark Shell 00:23
      • 3.18 Importing Spark Classes 00:14
      • 3.19 Creating the SparkContext 00:26
      • 3.20 Loading a File in Shell 00:11
      • 3.21 Performing Some Basic Operations on Files in Spark Shell RDDs 00:20
      • 3.22 Packaging a Spark Project with SBT 00:50
      • 3.23 Running a Spark Project With SBT 00:32
      • 3.24 Demo-Build a Scala Project 00:07
      • 3.25 Build a Scala Project 06:51
      • 3.26 Demo-Build a Spark Java Project 00:08
      • 3.27 Build a Spark Java Project 04:31
      • 3.28 Shared Variables—Broadcast 01:21
      • 3.29 Shared Variables—Accumulators 00:52
      • 3.30 Writing a Scala Application 00:20
      • 3.31 Demo-Run a Scala Application 00:07
      • 3.32 Run a Scala Application 01:43
      • 3.33 Demo-Write a Scala Application Reading the Hadoop Data 00:07
      • 3.34 Write a Scala Application Reading the Hadoop Data 01:23
      • 3.35 Demo-Run a Scala Application Reading the Hadoop Data 00:08
      • 3.36 Run a Scala Application Reading the Hadoop Data 02:21
      • 3.37 Scala RDD Extensions
      • 3.38 DoubleRDD Methods 00:08
      • 3.39 PairRDD Methods—Join 00:47
      • 3.40 PairRDD Methods—Others 00:06
      • 3.41 Java PairRDD Methods 00:09
      • 3.42 Java PairRDD Methods (contd.) 00:06
      • 3.43 General RDD Methods 00:06
      • 3.44 General RDD Methods (contd.) 00:05
      • 3.45 Java RDD Methods 00:08
      • 3.46 Java RDD Methods (contd.) 00:06
      • 3.47 Common Java RDD Methods 00:10
      • 3.48 Spark Java Function Classes 00:13
      • 3.49 Method for Combining JavaPairRDD Functions 00:42
      • 3.50 Transformations in RDD 00:34
      • 3.51 Other Methods 00:07
      • 3.52 Actions in RDD 00:08
      • 3.53 Key-Value Pair RDD in Scala 00:32
      • 3.54 Key-Value Pair RDD in Java 00:43
      • 3.55 Using MapReduce and Pair RDD Operations 00:25
      • 3.56 Reading Text File from HDFS 00:16
      • 3.57 Reading Sequence File from HDFS 00:21
      • 3.58 Writing Text Data to HDFS 00:18
      • 3.59 Writing Sequence File to HDFS 00:12
      • 3.60 Using GroupBy 00:07
      • 3.61 Using GroupBy (contd.) 00:05
      • 3.62 Demo-Run a Scala Application Performing GroupBy Operation 00:08
      • 3.63 Run a Scala Application Performing GroupBy Operation 03:13
      • 3.64 Demo-Run a Scala Application Using the Scala Shell 00:07
      • 3.65 Run a Scala Application Using the Scala Shell 04:02
      • 3.66 Demo-Write and Run a Java Application 00:06
      • 3.67 Write and Run a Java Application 01:49
      • 3.68 Quiz
      • 3.69 Summary 00:53
      • 3.70 Summary (contd.) 00:59
      • 3.71 Conclusion 00:15
    • Lesson 04 - Running SQL Queries Using Spark SQL 30:24
      • 4.1 Introduction 00:12
      • 4.2 Objectives 00:17
      • 4.3 Importance of Spark SQL 01:02
      • 4.4 Benefits of Spark SQL 00:47
      • 4.5 DataFrames 00:50
      • 4.6 SQLContext 00:50
      • 4.7 SQLContext (contd.) 01:13
      • 4.8 Creating a DataFrame 00:11
      • 4.9 Using DataFrame Operations 00:22
      • 4.10 Using DataFrame Operations (contd.) 00:05
      • 4.11 Demo-Run SparkSQL with a Dataframe 00:06
      • 4.12 Run SparkSQL with a Dataframe 08:53
      • 4.13 Interoperating with RDDs
      • 4.14 Using the Reflection-Based Approach 00:38
      • 4.15 Using the Reflection-Based Approach (contd.) 00:08
      • 4.16 Using the Programmatic Approach 00:44
      • 4.17 Using the Programmatic Approach (contd.) 00:07
      • 4.18 Demo-Run Spark SQL Programmatically 00:08
      • 4.19 Run Spark SQL Programmatically 00:01
      • 4.20 Data Sources
      • 4.21 Save Modes 00:32
      • 4.22 Saving to Persistent Tables 00:46
      • 4.23 Parquet Files 00:19
      • 4.24 Partition Discovery 00:38
      • 4.25 Schema Merging 00:29
      • 4.26 JSON Data 00:34
      • 4.27 Hive Table 00:45
      • 4.28 DML Operation-Hive Queries 00:27
      • 4.29 Demo-Run Hive Queries Using Spark SQL 00:07
      • 4.30 Run Hive Queries Using Spark SQL 04:58
      • 4.31 JDBC to Other Databases 00:49
      • 4.32 Supported Hive Features 00:38
      • 4.33 Supported Hive Features (contd.) 00:22
      • 4.34 Supported Hive Data Types 00:13
      • 4.35 Case Classes 00:15
      • 4.36 Case Classes (contd.) 00:07
      • 4.37 Quiz
      • 4.38 Summary 00:49
      • 4.39 Summary (contd.) 00:49
      • 4.40 Conclusion 00:13
    • Lesson 05 - Spark Streaming 35:09
      • 5.1 Introduction 00:11
      • 5.2 Objectives 00:15
      • 5.3 Introduction to Spark Streaming 00:50
      • 5.4 Working of Spark Streaming 00:20
      • 5.5 Features of Spark Streaming
      • 5.6 Streaming Word Count 01:34
      • 5.7 Micro Batch 00:19
      • 5.8 DStreams 00:34
      • 5.9 DStreams (contd.) 00:39
      • 5.10 Input DStreams and Receivers 01:19
      • 5.11 Input DStreams and Receivers (contd.) 00:55
      • 5.12 Basic Sources 01:14
      • 5.13 Advanced Sources 00:49
      • 5.14 Advanced Sources-Twitter
      • 5.15 Transformations on DStreams 00:15
      • 5.16 Transformations on DStreams (contd.) 00:06
      • 5.17 Output Operations on DStreams 00:29
      • 5.18 Design Patterns for Using ForeachRDD 01:15
      • 5.19 DataFrame and SQL Operations 00:26
      • 5.20 DataFrame and SQL Operations (contd.) 00:20
      • 5.21 Checkpointing 01:25
      • 5.22 Enabling Checkpointing 00:39
      • 5.23 Socket Stream 01:00
      • 5.24 File Stream 00:12
      • 5.25 Stateful Operations 00:28
      • 5.26 Window Operations 01:22
      • 5.27 Types of Window Operations 00:12
      • 5.28 Types of Window Operations (contd.) 00:06
      • 5.29 Join Operations-Stream-Dataset Joins 00:21
      • 5.30 Join Operations-Stream-Stream Joins 00:34
      • 5.31 Monitoring Spark Streaming Application 01:19
      • 5.32 Performance Tuning-High Level 00:20
      • 5.33 Performance Tuning-Detail Level
      • 5.34 Demo-Capture and Process the Netcat Data 00:07
      • 5.35 Capture and Process the Netcat Data 05:01
      • 5.36 Demo-Capture and Process the Flume Data 00:08
      • 5.37 Capture and Process the Flume Data 05:08
      • 5.38 Demo-Capture the Twitter Data 00:07
      • 5.39 Capture the Twitter Data 02:33
      • 5.40 Quiz
      • 5.41 Summary 01:01
      • 5.42 Summary (contd.) 01:04
      • 5.43 Conclusion 00:12
    • Lesson 06 - Spark ML Programming 40:08
      • 6.1 Introduction 00:12
      • 6.2 Objectives 00:20
      • 6.3 Introduction to Machine Learning 01:36
      • 6.4 Common Terminologies in Machine Learning
      • 6.5 Applications of Machine Learning 00:22
      • 6.6 Machine Learning in Spark 00:34
      • 6.7 Spark ML API
      • 6.8 DataFrames 00:32
      • 6.9 Transformers and Estimators 00:59
      • 6.10 Pipeline 00:48
      • 6.11 Working of a Pipeline 01:41
      • 6.12 Working of a Pipeline (contd.) 00:45
      • 6.13 DAG Pipelines 00:33
      • 6.14 Runtime Checking 00:21
      • 6.15 Parameter Passing 01:00
      • 6.16 General Machine Learning Pipeline-Example 00:05
      • 6.17 General Machine Learning Pipeline-Example (contd.)
      • 6.18 Model Selection via Cross-Validation 01:16
      • 6.19 Supported Types, Algorithms, and Utilities 00:31
      • 6.20 Data Types 01:26
      • 6.21 Feature Extraction and Basic Statistics 00:43
      • 6.22 Clustering 00:38
      • 6.23 K-Means 00:55
      • 6.24 K-Means (contd.) 00:05
      • 6.25 Demo-Perform Clustering Using K-Means 00:07
      • 6.26 Perform Clustering Using K-Means 04:41
      • 6.27 Gaussian Mixture 00:57
      • 6.28 Power Iteration Clustering (PIC) 01:17
      • 6.29 Latent Dirichlet Allocation (LDA) 00:35
      • 6.30 Latent Dirichlet Allocation (LDA) (contd.) 01:45
      • 6.31 Collaborative Filtering 01:13
      • 6.32 Classification 00:16
      • 6.33 Classification (contd.) 00:06
      • 6.34 Regression 00:42
      • 6.35 Example of Regression 00:56
      • 6.36 Demo-Perform Classification Using Linear Regression 00:08
      • 6.37 Perform Classification Using Linear Regression 02:01
      • 6.38 Demo-Run Linear Regression 00:06
      • 6.39 Run Linear Regression 02:14
      • 6.40 Demo-Perform Recommendation Using Collaborative Filtering 00:05
      • 6.41 Perform Recommendation Using Collaborative Filtering 02:23
      • 6.42 Demo-Run Recommendation System 00:06
      • 6.43 Run Recommendation System 02:45
      • 6.44 Quiz
      • 6.45 Summary 01:14
      • 6.46 Summary (contd.) 00:57
      • 6.47 Conclusion 00:12
    • Lesson 07 - Spark GraphX Programming 46:26
      • 7.1 Introduction 00:14
      • 7.2 Objectives 00:17
      • 7.3 Introduction to Graph-Parallel System 01:14
      • 7.4 Limitations of Graph-Parallel System 00:49
      • 7.5 Introduction to GraphX 01:21
      • 7.6 Introduction to GraphX (contd.) 00:06
      • 7.7 Importing GraphX 00:10
      • 7.8 The Property Graph 01:25
      • 7.9 The Property Graph (contd.) 00:07
      • 7.10 Features of the Property Graph
      • 7.11 Creating a Graph 00:14
      • 7.12 Demo-Create a Graph Using GraphX 00:07
      • 7.13 Create a Graph Using GraphX 10:08
      • 7.14 Triplet View 00:30
      • 7.15 Graph Operators 00:51
      • 7.16 List of Operators 00:23
      • 7.17 List of Operators (contd.) 00:05
      • 7.18 Property Operators 00:18
      • 7.19 Structural Operators 01:02
      • 7.20 Subgraphs 00:21
      • 7.21 Join Operators 01:09
      • 7.22 Demo-Perform Graph Operations Using GraphX 00:07
      • 7.23 Perform Graph Operations Using GraphX 05:46
      • 7.24 Demo-Perform Subgraph Operations 00:07
      • 7.25 Perform Subgraph Operations 01:37
      • 7.26 Neighborhood Aggregation 00:43
      • 7.27 mapReduceTriplets 00:42
      • 7.28 Demo-Perform MapReduce Operations 00:08
      • 7.29 Perform MapReduce Operations 09:18
      • 7.30 Counting Degree of Vertex 00:32
      • 7.31 Collecting Neighbors 00:28
      • 7.32 Caching and Uncaching 01:10
      • 7.33 Graph Builders
      • 7.34 Vertex and Edge RDDs 01:17
      • 7.35 Graph System Optimizations 01:22
      • 7.36 Built-in Algorithms
      • 7.37 Quiz
      • 7.38 Summary 01:12
      • 7.39 Summary (contd.) 00:55
      • 7.40 Conclusion 00:11

Exam & certification

  • How do I become a certified Apache Spark and Scala professional?

    To become a certified Apache Spark and Scala professional, you must fulfill the following criteria:
    • You must complete a project given by Simplilearn that is evaluated by the lead trainer. Your project may be submitted through the learning management system (LMS). If you have any questions or difficulties while working on the project, you may get assistance and clarification from our experts at SimpliTalk. If you have any further issues you may look to our Online Classroom Training, where you may attend any of the ongoing batches of Apache Spark and Scala Certification Training classes to get help with your project.
    • A minimum score of 80 percent is required to pass the online examination. If you don’t pass the online exam on the first attempt, you are allowed to retake the exam once.
    • At the end of the Scala course, you will receive an experience certificate stating that you have three months’ experience implementing Spark and Scala.

  • What are the prerequisites for the Scala course?

    The prerequisites for the Apache Spark and Scala course are:
    • Fundamental knowledge of any programming language
    • Basic understanding of any database, SQL and query language for databases
    • Working knowledge of Linux- or Unix-based systems (not mandatory)
    • Certification training as a Big Data Hadoop developer (recommended)

  • What do I need to do to unlock my Simplilearn certificate?

    Online Classroom:
    • Attend one complete batch
    • Complete one project and one simulation test with a minimum score of 60 percent
    Online Self-Learning:
    • Complete 85 percent of the course
    • Complete one project and one simulation test with a minimum score of 60 percent

  • What are the Benefits of Taking Apache Spark Certification Course in Bangalore?

    Businesses and recruiters prefer professionals with genuine knowledge, skills, and experience verified by a certification that is accepted across industries. Continuous learning is important for any working professional, not only to keep up with current market trends but also to expand their skill set and become more flexible in the workplace.
    This Apache Spark & Scala Certification Training Course will help learners stand out, polish their existing skill set in the big data domain, and take the necessary leap to bigger and more ambitious roles.

  • What is the Duration of Simplilearn's Apache Spark & Scala Certification Training Course in Bangalore?

    Simplilearn’s Apache Scala & Spark Certification Training has two learning methodologies.

    • One is the Self-paced e-Learning methodology that has a validity of 180 days (6 months) where learners can work at their own pace through high-quality e-learning video modules.
    • The second methodology is the Online Classroom Flexi-Pass that has a validity of 180 days (6 months) of high-quality e-learning videos plus 90 days of access to 1+ instructor-led online training classes.

  • How much does it cost to take this Apache Spark & Scala course in Bangalore?

    Simplilearn’s Apache Spark Certification Training course is priced at INR 8,999/- for Self Paced Learning and INR 16,999/- for Online Classroom Flexi-Pass.

  • Who provides the certification?

    After successful completion of the Apache Spark & Scala course, you will be awarded the course completion certificate from Simplilearn.

  • Is this course accredited?

    No, this course is not officially accredited.

  • What is the passing score for the Apache Spark & Scala exam?

    The Apache Spark & Scala certification exam is 120 minutes long and comprises 60 single- and multiple-choice questions. The passing score for the exam is 80% (i.e., you must answer 48 questions correctly).

  • How many attempts do I have to pass the Apache Spark & Scala exam?

    You have a maximum of two attempts to pass the exam. If you fail the first attempt, you may retake the exam immediately.

  • How long does it take to receive the Apache Spark & Scala course certification?

    Upon successful completion of Simplilearn’s Apache Spark & Scala online training, you will immediately receive the Apache Spark & Scala course certificate.

  • How long is the Apache Spark & Scala certification valid?

    The Apache Spark & Scala certification from Simplilearn has lifelong validity.

  • I have passed the Apache Spark & Scala exam. When and how do I receive my certificate?

    Upon successful completion of this Apache Spark and Scala course online and passing the exam, you will receive the certificate through our Learning Management System, which you can download or share via email or LinkedIn.

  • Do you offer a money-back guarantee for the training program?

    Yes. We do offer a money-back guarantee for many of our training programs. Refer to our Refund Policy and submit refund requests via our Help and Support portal.

    Course advisor

    Ronald van Loon, Top 10 Big Data & Data Science Influencer, Director - Adversitement

    Named by Onalytica as one of the three most influential people in Big Data, Ronald is also an author for a number of leading Big Data and Data Science websites, including Datafloq, Data Science Central, and The Guardian. He also regularly speaks at renowned events.

    Reviews

    Amit Pradhan, Assistant Manager at HT Media, Bangalore

    It was really a great learning experience. The Big Data course has been instrumental in laying the foundation for beginners, both in terms of conceptual content and the practical lab. Thanks to the Simplilearn team, it was no less than a live classroom. Really appreciate it.

    Aravinda Reddy, Lead Software Engineer at Thomson Reuters, Bangalore

    The training has been very good. The trainer was right on the targeted agenda, with great technical skills. He covered all the topics with a good number of examples and allowed us to do hands-on work as well.

    Anjaneya Prasad Nidubrolu, Assistant Consultant at Tata Consultancy Services, Bangalore

    Well-structured course, and the instructor is very good. He has a good grip on the subject, clears our doubts instantly, and makes sure that all the students understand things correctly.

    Nagarjuna D N, AT&T, Bangalore

    High-quality training from an industry expert at your convenience, at an affordable price, with the resources you need to master what you are learning.

    Vinod JV, Lead Software Engineer at Thomson Reuters, Bangalore

    The trainer has excellent knowledge of the subject and is very thorough in answering doubts. I hope Simplilearn will continue to provide trainers like this.

    Arijit Chatterjee, Senior Consultant at Capgemini, Bangalore

    It was really a wonderful experience to have such real-time project discussions during the training session. It helped me learn in depth.

    Peter Dao, Senior Technical Analyst at Sutter Health, Sacramento

    The instructor is very experienced in these topics. I like the examples given in the classes.

    Martin Stufi, C.E.O - Solutia, s.r.o., Prague

    Great course! I really recommend it!

    Olga Barrett, Career Advisor @ CV Wizard of OZ, Perth

    Great class. Very interactive. The overview of HDFS and MapReduce was very helpful; it made for a smooth transition to the Apache Spark and Scala material. The content is good. Overall, excellent training.


    FAQs

    • What are the system requirements for taking this course?

      Your system must fulfill the following requirements:
      • 64-bit Operating System
      • 8GB RAM

    • How will the labs be conducted?

      We will help you set up a virtual machine with local access. The detailed installation guide is provided in the LMS.

    • How is the project completed and how do I get certified?

      Everything you need to complete your project, such as problem statements and data points, is provided for you in the LMS. If you have other questions, you can contact us.
       
      After completing the Scala course, you will submit your finished project to the trainer for evaluation. Upon successful evaluation of the project and completion of the online exam, you will get certified as a Spark and Scala Professional.
       

    • Who are the instructors/trainers and how are they selected?

      All of our highly qualified instructors are Apache Spark and Scala certified, with more than 15 years of experience in training and working professionally in the field. Each of them has gone through a rigorous selection process that includes profile screening, technical evaluation, and a live training demonstration before being certified to train for us. We also ensure that only those trainers who maintain a high alumni rating continue to train for us.

    • What are the modes of training offered for this Scala Training course?

      We offer two modes of training:
       
      Live Virtual Classroom or Online Classroom: With instructor-led online classroom training, you have the option to attend the course remotely from your desktop or laptop via video conferencing. This format improves productivity and decreases time spent away from work or home.
       
      Online Self-Learning: In this mode, you will receive lecture videos that you can review at your own pace.

    • What if I miss a class?

      We provide recordings of each class after the session is conducted, so you can catch up on training before the next session.
       

    • Can I cancel my enrollment? Do I get a refund?

      Yes, you can cancel your enrollment. We provide a complete refund after deducting the administration fee. To learn more, please go through our Refund Policy.

    • How do I enroll for the online training?

      You can enroll for this Scala training on our website and make an online payment using any of the following options: 
      • Visa Credit or Debit Card
      • MasterCard
      • American Express
      • Diners Club
      • PayPal
      Once payment is received you will automatically receive a payment receipt and access information via email.

    • I’d like to learn more about this training program. Whom should I contact?

      Contact us using the form on the right of any page on the Simplilearn website, or select the Live Chat link. Our customer service representatives can provide you with more details.

    • What is Global Teaching Assistance?

      Our teaching assistants are a dedicated team of subject matter experts here to help you get certified in your first attempt. They engage students proactively to ensure the course path is being followed and help you enrich your learning experience, from class onboarding to project mentoring and job assistance. Teaching Assistance is available during business hours.

    • What is covered under the 24/7 Support promise?

      We offer 24/7 support through email, chat and calls. We also have a dedicated team that provides on-demand assistance through our community forum. What’s more, you will have lifetime access to the community forum, even after completion of your course with us.

    Our Bangalore Correspondence / Mailing address

    # 53/1 C, Manoj Arcade, 24th Main, 2nd Sector, HSR Layout, Bangalore - 560102, Karnataka, India.

    • Disclaimer
    • PMP, PMI, PMBOK, CAPM, PgMP, PfMP, ACP, PBA, RMP, SP, and OPM3 are registered marks of the Project Management Institute, Inc.