By Simplilearn

Last updated on Dec 15, 2020

Harvard Business Review referred to data scientist as the "Sexiest Job of the 21st Century," and Glassdoor placed it #1 on its list of the 25 Best Jobs in America. According to IBM, demand for the role was projected to soar 28 percent by 2020.
It should come as no surprise that in the new era of Big Data and Machine Learning, data science skills are in demand and its practitioners are becoming rock stars. Companies that can leverage massive amounts of data to improve the way they serve customers, build products, and run their operations will be positioned to thrive in this economy.
It's unwise to ignore the importance of data and our capacity to analyze, consolidate, and contextualize it. Data scientists are relied upon to fill this need, but there is a serious lack of qualified candidates worldwide.
If you're moving down the path to becoming a data scientist, you must be prepared to impress prospective employers with your knowledge. In addition to explaining why data science is so important, you'll need to show that you're technically proficient with Big Data concepts, frameworks, and applications.
Here's a list of the most popular data science interview questions you can expect to face, and how to frame your answers.
Want to build a successful career in data science? Check out the Data Science Certification Program today.
Supervised Learning | Unsupervised Learning
---|---
Uses known and labeled data as input | Uses unlabeled data as input
Has a feedback mechanism, since predictions can be checked against the labels | Has no feedback mechanism
Commonly used algorithms: decision trees, logistic regression, support vector machines | Commonly used algorithms: k-means clustering, hierarchical clustering, the apriori algorithm
Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
The sigmoid function maps the linear combination of the independent variables to a probability between 0 and 1. Its formula is:

sigmoid(x) = 1 / (1 + e^(-x))
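A minimal Python sketch of the sigmoid (the values shown in the comments are approximate):

```python
import numpy as np

def sigmoid(x):
    """Logistic (sigmoid) function: maps any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# In logistic regression, the linear combination of the features (w·x + b)
# is passed through the sigmoid to estimate the probability of the positive class.
print(sigmoid(0))    # 0.5  -> the decision boundary
print(sigmoid(3))    # ~0.95
print(sigmoid(-3))   # ~0.05
```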
For example, let's say you want to build a decision tree to decide whether you should accept or decline a job offer. The decision tree for this case is as shown:
Reading the tree from the root down, it is clear that an offer is accepted only when every condition along the path to the "accept" leaf is satisfied.
A random forest is built from a number of decision trees. If you split the training data into different samples and build a decision tree on each sample, the random forest aggregates the predictions of all those trees.
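A minimal scikit-learn sketch contrasting a single decision tree with a random forest (the Iris dataset and the parameters here are illustrative assumptions, not part of the original answer):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A single decision tree fit on the training data
tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# A random forest: many trees, each trained on a bootstrap sample of the data,
# whose individual predictions are aggregated by majority vote
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

print("Decision tree accuracy:", tree.score(X_test, y_test))
print("Random forest accuracy:", forest.score(X_test, y_test))
```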
Overfitting refers to a model that fits the training data too closely, capturing noise from a small sample rather than the underlying pattern, so it performs poorly on new data. There are three main methods to avoid overfitting:

- Keep the model simple: use fewer variables and parameters so the model captures less noise.
- Use cross-validation techniques, such as k-fold cross-validation.
- Use regularization techniques, such as LASSO, that penalize model parameters likely to cause overfitting.
Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.
Example: height of students
Height (in cm) |
---|
164 |
167.3 |
170 |
174.2 |
178 |
180 |
The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.
Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.
Example: temperature and ice cream sales in the summer season
Temperature (in Celsius) | Sales |
---|---|
20 | 2,000 |
25 | 2,100 |
26 | 2,300 |
28 | 2,400 |
30 | 2,600 |
36 | 3,100 |
The table shows that temperature and sales are directly related: the hotter the temperature, the higher the sales.
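A quick way to quantify this relationship is the correlation coefficient; a small pandas sketch using the values from the table above:

```python
import pandas as pd

data = pd.DataFrame({
    "temperature_c": [20, 25, 26, 28, 30, 36],
    "sales": [2000, 2100, 2300, 2400, 2600, 3100],
})

# A Pearson correlation close to +1 confirms the strong positive relationship
print(data["temperature_c"].corr(data["sales"]))
```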
Data that involves three or more variables is categorized as multivariate. It is similar to bivariate data, but contains more variables, for example several independent variables used to explain one dependent variable.
Example: data for house price prediction
No. of rooms | Floors | Area (sq ft) | Price |
---|---|---|---|
2 | 0 | 900 | $400,000 |
3 | 2 | 1,100 | $600,000 |
3.5 | 5 | 1,500 | $900,000 |
4 | 3 | 2,100 | $1,200,000 |
The patterns can be studied by drawing conclusions using the mean, median, mode, dispersion or range, minimum, maximum, and so on. You can start by describing the data and then use it to predict what the price of a house will be.
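As an illustration (not part of the original answer), a simple linear regression could be fit to the four rows in the table above to predict price from the other variables:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

houses = pd.DataFrame({
    "rooms": [2, 3, 3.5, 4],
    "floors": [0, 2, 5, 3],
    "area_sqft": [900, 1100, 1500, 2100],
    "price": [400_000, 600_000, 900_000, 1_200_000],
})

model = LinearRegression().fit(houses[["rooms", "floors", "area_sqft"]], houses["price"])

# Predict the price of a hypothetical 3-room, 1-floor, 1,300 sq ft house
new_house = pd.DataFrame({"rooms": [3], "floors": [1], "area_sqft": [1300]})
print(model.predict(new_house))
```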
There are two main methods for feature selection: filter methods and wrapper methods.
This involves:

- Linear discriminant analysis (LDA)
- ANOVA
- Chi-square tests
The best analogy for selecting features is "bad data in, bad answer out." When we're limiting or selecting the features, it's all about cleaning up the data coming in.
This involves:

- Forward selection: start with no features and add them one at a time, keeping those that improve the model
- Backward selection: start with all features and remove them one at a time, checking how the model performs
- Recursive feature elimination: recursively remove the weakest features and re-fit the model (a sketch follows below)
Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.
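A small sketch of one common wrapper method, recursive feature elimination (RFE), with scikit-learn (the estimator, dataset, and number of features to keep are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# RFE repeatedly fits the model and drops the weakest feature
# until only the requested number of features remains.
selector = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5)
selector.fit(X, y)

print(selector.support_)   # boolean mask of the selected features
print(selector.ranking_)   # ranking of all features (1 = selected)
```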
Write a program that prints the numbers from one to 50. For multiples of three, print "Fizz" instead of the number, and for multiples of five, print "Buzz." For numbers which are multiples of both three and five, print "FizzBuzz."
The code is shown below:
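One straightforward Python version (the loop variable name is just an illustration), consistent with the note about range(51) that follows:

```python
for num in range(51):
    if num % 3 == 0 and num % 5 == 0:
        print("FizzBuzz")
    elif num % 3 == 0:
        print("Fizz")
    elif num % 5 == 0:
        print("Buzz")
    else:
        print(num)
```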
Note that the code uses range(51), which covers zero to 50, while the question asks for one to 50. To match the question exactly, change the range to range(1, 51).

Running the code prints the numbers in order, with Fizz, Buzz, and FizzBuzz substituted where appropriate.
The following are ways to handle missing data values:

If the data set is large, we can simply remove the rows with missing values. This is the quickest way, and we then use the rest of the data to build the model.

For smaller data sets, we can substitute missing values with the mean of the rest of the data using a pandas DataFrame in Python, for example with df.mean() and df.fillna(df.mean()), as sketched below.
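A short pandas sketch of both approaches (the small DataFrame here is a made-up example):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, 30, np.nan, 40],
                   "salary": [50000, np.nan, 65000, 80000]})

# Option 1: drop rows that contain missing values (quick, suits large data sets)
df_dropped = df.dropna()

# Option 2: fill missing values with the column means (suits smaller data sets)
df_imputed = df.fillna(df.mean())

print(df_dropped)
print(df_imputed)
```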
plot1 = [1,3]
plot2 = [2,5]
The Euclidean distance can be calculated as follows:
from math import sqrt
euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)
print(euclidean_distance)  # 2.236..., i.e., sqrt(5)
Check out Simplilearn's video on "Data Science Interview Questions," curated by industry experts to help you prepare for an interview.
Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.
This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there's no point in storing a value in two different units (meters and inches).
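Principal component analysis (PCA) is one common way to do this; a brief scikit-learn sketch (the dataset and the number of components are illustrative choices, not part of the original answer):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 1,797 samples with 64 features each

# Project the 64-dimensional data onto its 10 leading principal components
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)        # (1797, 64) -> (1797, 10)
print(pca.explained_variance_ratio_.sum())   # fraction of variance retained
```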
Consider the following 3 × 3 matrix A:

[ -2  -4   2 ]
[ -2   1   2 ]
[  4   2   5 ]
The characteristic equation is det(A - λI) = 0.

Expanding the determinant:

(-2 - λ)[(1 - λ)(5 - λ) - 2×2] + 4[(-2)(5 - λ) - 4×2] + 2[(-2)×2 - 4(1 - λ)] = 0

-λ³ + 4λ² + 27λ - 90 = 0, or equivalently

λ³ - 4λ² - 27λ + 90 = 0

This is the algebraic equation whose roots are the eigenvalues.

By trial and error, λ = 3 is a root:

3³ - 4×3² - 27×3 + 90 = 27 - 36 - 81 + 90 = 0

Hence, (λ - 3) is a factor:

λ³ - 4λ² - 27λ + 90 = (λ - 3)(λ² - λ - 30) = (λ - 3)(λ + 5)(λ - 6)

The eigenvalues are therefore 3, -5, and 6.
Calculate the eigenvector for λ = 3 by solving (A - 3I)v = 0 with v = (X, Y, Z):

-5X - 4Y + 2Z = 0
-2X - 2Y + 2Z = 0
4X + 2Y + 2Z = 0

Setting X = 1, the first two equations become:

-5 - 4Y + 2Z = 0
-2 - 2Y + 2Z = 0

Subtracting the first equation from the second:

3 + 2Y = 0, so Y = -(3/2)

Substituting back into the second equation:

-2 - 2(-3/2) + 2Z = 0, so Z = -(1/2)
Similarly, we can calculate the eigenvectors for -5 and 6.
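The result can be cross-checked numerically with NumPy (a quick verification sketch, not part of the original worked solution):

```python
import numpy as np

A = np.array([[-2, -4, 2],
              [-2,  1, 2],
              [ 4,  2, 5]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # approximately 3, -5, and 6 (order may differ)
print(eigenvectors)   # columns are the corresponding eigenvectors, scaled to unit length
```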
The steps to maintain a deployed model are:
1. Monitor: Constant monitoring of all models is needed to determine their performance accuracy. When you change something, you want to figure out how your changes are going to affect things. This needs to be monitored to ensure the model is doing what it's supposed to do.

2. Evaluate: Evaluation metrics of the current model are calculated to determine if a new algorithm is needed.

3. Compare: The new models are compared to each other to determine which model performs best.

4. Rebuild: The best-performing model is re-built on the current state of data.
A recommender system predicts how a user would rate a specific product based on their preferences. It can be split into two different areas:

Collaborative filtering: As an example, Last.fm recommends tracks that other users with similar interests play often. This is also commonly seen on Amazon after making a purchase; customers may notice the following message accompanied by product recommendations: "Users who bought this also bought…"

Content-based filtering: As an example, Pandora uses the properties of a song to recommend music with similar properties. Here, we look at the content itself, instead of looking at who else is listening to the music.
RMSE and MSE are two of the most common measures of accuracy for a linear regression model.

RMSE stands for Root Mean Squared Error.

MSE stands for Mean Squared Error.
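For reference, with y_i the observed values, ŷ_i the predicted values, and n the number of observations:

MSE = (1/n) × Σ (y_i − ŷ_i)²

RMSE = √MSE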
We use the elbow method to select k for k-means clustering. The idea is to run k-means on the data set for a range of values of k (the number of clusters) and compute the within-cluster sum of squares (WSS) for each, as sketched below.

WSS is defined as the sum of the squared distances between each member of a cluster and its centroid. As k increases, WSS keeps dropping; the "elbow" point where the decrease levels off is a good choice for k.
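A short scikit-learn sketch of the elbow method (KMeans exposes WSS as inertia_; the synthetic dataset and the range of k values are illustrative assumptions):

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

ks = range(1, 10)
wss = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wss.append(km.inertia_)   # within-cluster sum of squares for this k

# The "elbow" is the value of k after which WSS stops dropping sharply
plt.plot(list(ks), wss, marker="o")
plt.xlabel("k")
plt.ylabel("WSS (inertia)")
plt.show()
```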
p-value ≤ 0.05: This indicates strong evidence against the null hypothesis, so you reject the null hypothesis.

p-value > 0.05: This indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.

p-value close to the 0.05 cutoff: This is considered marginal, meaning it could go either way.
You can drop an outlier if it is a garbage value.

Example: height of an adult = abc ft. This cannot be true, as height cannot be a string value; in this case, the outlier can be removed.

If the outliers have extreme values, they can also be removed. For example, if all the data points are clustered between zero and 10, but one point lies at 100, then we can remove that point.
If you cannot drop outliers, you can try the following:

- Try a different model: data detected as outliers by a linear model may fit a nonlinear model well.
- Try normalizing the data, so the extreme points are pulled into a similar range.
- Use algorithms that are less affected by outliers, such as random forests.
A time series is stationary when the variance and mean of the series are constant over time.
Here is a visual example:
In the first graph, the variance is constant with time. Here, X is the time factor and Y is the variable. The value of Y goes through the same points all the time; in other words, it is stationary.
In the second graph, the waves get bigger, which means it is non-stationary and the variance is changing with time.
Consider this confusion matrix (650 total observations):

Actual \ Predicted | Positive | Negative |
---|---|---|
Positive | 262 (True Positive) | 26 (False Negative) |
Negative | 15 (False Positive) | 347 (True Negative) |

From it, you can read off the total number of observations, the actual values, and the predicted values.
The formula for accuracy is:

Accuracy = (True Positive + True Negative) / Total Observations

= (262 + 347) / 650

= 609 / 650

= 0.937

As a result, we get an accuracy of approximately 94 percent.
Consider the same confusion matrix used in the previous question.
Precision = True Positive / (True Positive + False Positive)

= 262 / (262 + 15)

= 262 / 277

= 0.946

Recall = True Positive / (True Positive + False Negative)

= 262 / (262 + 26)

= 262 / 288

= 0.910
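The same figures can be reproduced in a few lines of Python, using the counts read off the confusion matrix above:

```python
TP, TN, FP, FN = 262, 347, 15, 26   # counts from the confusion matrix (650 observations)

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)

print(round(accuracy, 3), round(precision, 3), round(recall, 3))  # 0.937, 0.946, 0.91
```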
The recommendation engine is built with collaborative filtering. Collaborative filtering draws on the behavior of other users and their purchase history in terms of ratings, selections, and so on.

The engine makes predictions about what might interest a person based on the preferences of other users. In this algorithm, item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.
Usually, we have order tables and customer tables that contain the following columns:
Order Table: Orderid, customerId, OrderNumber, TotalAmount

Customer Table: Id, FirstName, LastName, City, Country
The SQL query is:
SELECT OrderNumber, TotalAmount, FirstName, LastName, City, Country
FROM Order
JOIN Customer
ON Order.CustomerId = Customer.Id
Cancer detection data sets are imbalanced, so accuracy should not be used as the sole measure of performance. If a model reports, say, 96 percent accuracy, it is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient's prognosis.

Hence, to evaluate model performance, we should use sensitivity (true positive rate), specificity (true negative rate), and the F measure to determine the class-wise performance of the classifier.
The K nearest neighbors (KNN) algorithm can be used to impute missing values: for a record with a missing value, it finds the nearest neighbors based on the other features and fills in the value from them.

When you're dealing with k-means clustering or linear regression, you need to handle missing values during pre-processing; otherwise, those algorithms will fail. Decision trees share the same problem, although there is some variance among implementations.
[0, 0, 0, 1, 1, 1, 1, 1]
Choose the correct answer.
The target variable, in this case, is 1.
The formula for calculating the entropy, with p the number of positive instances (1s) and n the total number of instances, is:

Entropy = -[ (p/n) log(p/n) + ((n-p)/n) log((n-p)/n) ]

Putting p = 5 and n = 8, we get:

Entropy = -(5/8 log(5/8) + 3/8 log(3/8)), which corresponds to option A.
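Evaluated with base-2 logarithms (an assumption, since the base is not stated in the question), this works out to approximately 0.954 bits.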
Choose the correct option:
The most appropriate algorithm for this case is A, logistic regression.
Choose the correct option:
As we are grouping people by four different similarities, the number of groups indicates the value of k (k = 4). Therefore, K-means clustering (answer A) is the most appropriate algorithm for this study.
Choose the right answer:
The answer is A: {grape, apple} must be a frequent itemset
The answer is A: One-way ANOVA
A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that's easy to analyze.
Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique for isolating the root causes of faults or problems. A factor is called a root cause if removing it from the problem-fault sequence prevents the final undesirable event from recurring.
Logistic regression is also known as the logit model. It is a technique used to forecast the binary outcome from a linear combination of predictor variables.
Recommender systems are a subclass of information filtering systems that are meant to predict the preferences or ratings that a user would give to a product.
Cross-validation is a model validation technique for evaluating how the outcomes of a statistical analysis will generalize to an independent data set. It is mainly used where the objective is forecasting and one wants to estimate how accurately a model will perform in practice.

The goal of cross-validation is to set aside part of the data to test the model during the training phase (i.e., a validation data set) in order to limit problems like overfitting and gain insight into how the model will generalize to an independent data set.
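A brief scikit-learn sketch of k-fold cross-validation (the model, dataset, and number of folds are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: train on four folds, validate on the held-out fold, repeat
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)
print(scores.mean())
```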
Most recommender systems use this filtering process to find patterns and information by combining the perspectives of numerous data sources and several collaborating agents.
Gradient descent methods do not always converge to the same point, because in some cases they reach a local minimum or a local optimum rather than the global optimum. Whether this happens is governed by the data and the starting conditions.
A/B testing is statistical hypothesis testing for randomized experiments with two variants, A and B. The objective of A/B testing is to detect which changes to a web page maximize or improve the outcome of a strategy.
It is a theorem that describes the result of performing the same experiment a large number of times, and it forms the basis of frequency-style thinking. It states that as the number of trials increases, the sample mean, sample variance, and sample standard deviation converge to the quantities they are trying to estimate.
These are extraneous variables in a statistical model that correlates directly or inversely with both the dependent and the independent variable. The estimate fails to account for the confounding factor.
A star schema is a traditional database schema with a central fact table. Satellite tables map IDs to physical names or descriptions and can be connected to the central fact table using the ID fields; these tables are known as lookup tables and are principally useful in real-time applications, as they save a lot of memory. Sometimes, star schemas involve several layers of summarization to retrieve information faster.
You will want to update an algorithm when:

- You want the model to evolve as data streams through the infrastructure
- The underlying data source is changing
- There is a case of non-stationarity (the data distribution shifts over time)
Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing, or stretching, and eigenvalues are the corresponding scale factors.

Eigenvectors help in understanding linear transformations. In data analysis, we usually calculate the eigenvectors of a correlation or covariance matrix.
Resampling is done in any of these cases:

- Estimating the accuracy of sample statistics by drawing randomly with replacement from a set of data points, or by using subsets of the accessible data
- Substituting labels on data points when performing significance tests (permutation tests)
- Validating models by using random subsets (bootstrapping, cross-validation)
Selection bias, in general, is a problematic situation in which error is introduced due to a non-random population sample.
Survivorship bias is the logical error of focusing on aspects that support surviving a process and casually overlooking those that did not because of their lack of prominence. This can lead to wrong conclusions in numerous ways.
The underlying principle of this technique is that several weak learners combine to provide a strong learner.
This exhaustive list is sure to strengthen your preparation for data science interview questions.
Are you looking forward to becoming a Data Science expert? This career guide is a perfect read to get you started in the thriving field of Data Science. Download the eBook now!
For data scientists, the work isn't easy, but it's rewarding and there are plenty of available positions out there. These data science interview questions can help you get one step closer to your dream job. So, prepare yourself for the rigors of interviewing and stay sharp with the nuts and bolts of data science.
Simplilearn's comprehensive Post Graduate Program in Data Science, in partnership with Purdue University and in collaboration with IBM will prepare you for one of the world's most exciting technology frontiers.