By Shruti M | Last updated on Dec 31, 2020

Data analytics is widely used in every sector in the 21st century. A career in the field of data analytics is highly lucrative in today's times, with its career potential increasing by the day. Among the many job roles in this field, the data analyst's role is popular globally. A data analyst collects and processes data, and analyzes large datasets to derive meaningful insights from raw data.
If you plan to apply for a data analyst position, there is a set of data analyst interview questions that you should be prepared for. In this article, you will get acquainted with the top data analyst interview questions, which will guide you through your interview process.
For your convenience, we have segregated the questions into the following categories based on difficulty level:
So, let’s start with our Beginner Level Data Analyst interview questions.
Here is a set of common Data Analyst interview questions aimed at beginners.
| Data Mining | Data Profiling |
| Data mining is the process of discovering relevant information that has not been identified before. | Data profiling is done to evaluate a dataset for its uniqueness, logic, and consistency. |
| In data mining, raw data is converted into valuable information. | Data profiling cannot identify inaccurate or incorrect data values. |
Data Wrangling is the process wherein raw data is cleaned, structured, and enriched into a desired usable format for better decision making. It involves discovering, structuring, cleaning, enriching, validating, and analyzing data. This process can turn and map out large amounts of data extracted from various sources into a more useful format. Techniques such as merging, grouping, concatenating, joining, and sorting are used to analyze the data. Thereafter it gets ready to be used with another dataset.
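As a rough illustration of these wrangling techniques, here is a minimal pandas sketch (the DataFrames, column names, and values are hypothetical, not part of the original question):
import pandas as pd

# Hypothetical raw data pulled from two different sources
orders = pd.DataFrame({"order_id": [1, 2, 3], "customer_id": [10, 10, 20], "amount": [250.0, 40.5, 99.9]})
customers = pd.DataFrame({"customer_id": [10, 20], "region": ["East", "West"]})

# Joining/merging, then grouping and sorting to shape the data for analysis
merged = orders.merge(customers, on="customer_id", how="left")
summary = merged.groupby("region", as_index=False)["amount"].sum().sort_values("amount", ascending=False)
print(summary)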
The various steps involved in any common analytics project are as follows:
1. Understand the business problem, define the organizational goals, and plan for a lucrative solution.
2. Gather the right data from various sources and other information based on your priorities.
3. Clean the data to remove unwanted, redundant, and missing values, and make it ready for analysis.
4. Use data visualization and business intelligence tools, data mining techniques, and predictive modeling to analyze the data.
5. Interpret the results to find hidden patterns and future trends, and to gain insights.
The common problems encountered in any analytics project are:
As a data analyst, you are expected to know the tools mentioned below for analysis and presentation purposes. Some of the popular tools you should know are:
SQL and relational database tools – for working with data stored in relational databases
Tableau and Power BI – for creating reports and dashboards
Python and R – for statistical analysis, data modeling, and exploratory analysis
MS Excel and PowerPoint – for presentation, displaying the final results and important conclusions
There are four methods to handle missing values in a dataset (a short pandas sketch follows this list):
1. Listwise deletion: an entire record is excluded from analysis if any single value is missing.
2. Average imputation: take the average value of the other participants' responses and fill in the missing value.
3. Regression substitution: use multiple-regression analysis to estimate the missing value.
4. Multiple imputation: create plausible values based on the correlations for the missing data, and then average the simulated datasets by incorporating random errors into your predictions.
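As a quick, hedged sketch of the two simplest approaches above (listwise deletion and average imputation) in pandas, assuming a small hypothetical DataFrame:
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 32, 41], "salary": [50000, 62000, np.nan, 71000]})  # hypothetical data

dropped = df.dropna()                              # listwise deletion: drop any row with a missing value
imputed = df.fillna(df.mean(numeric_only=True))    # average imputation: fill gaps with the column mean
print(dropped)
print(imputed)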
Normal Distribution refers to a continuous probability distribution that is symmetric about the mean. In a graph, normal distribution will appear as a bell curve.
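For illustration only, you can draw samples from a normal distribution with NumPy and check that they are roughly symmetric about the mean:
import numpy as np

samples = np.random.normal(loc=0, scale=1, size=10_000)  # 10,000 draws with mean 0 and standard deviation 1
print(samples.mean(), samples.std())                     # both should come out close to 0 and 1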
Time Series analysis is a statistical procedure that deals with the ordered sequence of values of a variable at equally spaced time intervals. Time series data are collected at adjacent periods. So, there is a correlation between the observations. This feature distinguishes time-series data from cross-sectional data.
Below is an example of time-series data on coronavirus cases and its graph.
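As a rough, hypothetical sketch (with made-up numbers, not real case counts), equally spaced daily observations can be represented as a pandas time series:
import pandas as pd

dates = pd.date_range("2020-03-01", periods=5, freq="D")            # equally spaced daily time intervals
cases = pd.Series([10, 14, 21, 30, 44], index=dates, name="cases")  # hypothetical daily counts
print(cases)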
| Data Joining | Data Blending |
| Data joining can only be carried out when the data comes from the same source. | Data blending is used when the data comes from two or more different sources. |
| E.g.: Combining two or more worksheets from the same Excel file, or two tables from the same database. All the combined sheets or tables contain a common set of dimensions and measures. | E.g.: Combining an Oracle table with a SQL Server table, combining an Excel sheet with an Oracle table, or two sheets from Excel. In data blending, each data source contains its own set of dimensions and measures. |
| Overfitting | Underfitting |
| The model trains the data well using the training set. | The model neither trains the data well nor generalizes to new data. |
| The performance drops considerably over the test set. | Performs poorly on both the training and the test set. |
| Happens when the model learns the random fluctuations and noise in the training dataset in detail. | Happens when there is too little data to build an accurate model, or when we try to develop a linear model using non-linear data. |
VLOOKUP is used when you need to find things in a table or a range by row.
VLOOKUP accepts the following four parameters:
lookup_value - The value to look for in the first column of a table
table - The table from where you can extract value
col_index - The column from which to extract value
range_lookup - [optional] TRUE = approximate match (default). FALSE = exact match
Let’s understand VLOOKUP with an example.
If you wanted to find the department to which Stuart belongs, you could use the VLOOKUP function as shown below:
=VLOOKUP(A11, A2:E7, 3, 0)
Here, cell A11 holds the lookup value, A2:E7 is the table array, 3 is the column index number of the column containing department information, and 0 is the range_lookup argument, which requests an exact match.
If you hit Enter, it will return “Marketing”, indicating that Stuart is from the marketing department.
To subset or filter data in SQL, we use WHERE and HAVING clauses.
Consider the following movie table.
Using this table, let’s find the records for movies that were directed by Brad Bird.
Now, let’s filter the table for directors whose movies have an average duration greater than 115 minutes.
| WHERE | HAVING |
| The WHERE clause operates on row data. | The HAVING clause operates on aggregated data. |
| In the WHERE clause, the filter occurs before any groupings are made. | HAVING is used to filter values from a group. |
| Aggregate functions cannot be used. | Aggregate functions can be used. |
Syntax of WHERE clause:
SELECT column1, column2, ...
FROM table_name
WHERE condition;
Syntax of HAVING clause:
SELECT column_name(s)
FROM table_name
WHERE condition
GROUP BY column_name(s)
HAVING condition
ORDER BY column_name(s);
There are two ways to create a Pandas DataFrame: you can build it from in-memory data such as lists or dictionaries, or you can load it from an existing file.
To create a DataFrame from a file in Python, import the Pandas library and use the read_csv function to load the .csv file, providing the correct location of the file along with its name and extension.
To display the head of the dataset, use the head() function.
The ‘describe’ method is used to return the summary statistics in Python.
You can use the column names to extract the desired columns.
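A minimal sketch of these steps is shown below; the file name data.csv and the column names are placeholders, not part of the original example:
import pandas as pd

df = pd.read_csv("data.csv")    # load a .csv file into a DataFrame (hypothetical path)
print(df.head())                # display the first five rows
print(df.describe())            # summary statistics for the numeric columns
print(df[["name", "age"]])      # extract specific columns by name (hypothetical column names)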
With that, we have come to the end of the beginner-level data analyst interview questions. Now let's head to the next section.
An outlier is a data point that is distant from other similar points. They may be due to variability in the measurement or may indicate experimental errors.
The graph depicted below shows there are three outliers in the dataset.
To deal with outliers, you can use the following four methods:
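As one hedged illustration (not necessarily one of the methods the question lists), outliers are often flagged in Python with the interquartile-range (IQR) rule:
import numpy as np

data = np.array([10, 12, 11, 13, 12, 95, 11, 10, 120])     # hypothetical values containing outliers
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr               # conventional IQR fences
print(data[(data < lower) | (data > upper)])                # points outside the fences are flagged as outliers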
| Descriptive | Predictive | Prescriptive |
| Provides insights into the past to answer “what has happened” | Understands the future to answer “what could happen” | Suggests various courses of action to answer “what should you do” |
| Uses data aggregation and data mining techniques | Uses statistical models and forecasting techniques | Uses simulation algorithms and optimization techniques to advise possible outcomes |
| Example: An ice cream company can analyze how much ice cream was sold, which flavors were sold, and whether more or less ice cream was sold than the day before | Example: An ice cream company can predict how much ice cream is likely to be sold on a given day, based on factors such as the weather and past sales | Example: Lower prices to increase the sale of ice creams, or produce more/fewer quantities of a specific flavor of ice cream |
Sampling is a statistical method to select a subset of data from an entire dataset (population) to estimate the characteristics of the whole population.
There are five major types of sampling methods:
1. Simple random sampling
2. Systematic sampling
3. Cluster sampling
4. Stratified sampling
5. Judgmental or purposive sampling
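For example, simple random sampling can be sketched in pandas as follows (the population DataFrame is hypothetical):
import pandas as pd

population = pd.DataFrame({"id": range(1, 101)})       # hypothetical population of 100 records
sample = population.sample(n=10, random_state=42)      # simple random sample of 10 records
print(sample)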
Hypothesis testing is the procedure used by statisticians and scientists to accept or reject statistical hypotheses. There are mainly two types of hypothesis testing:
It states that there is no relation between the predictor and outcome variables in the population. It is denoted by H0.
Example: There is no association between a patient’s BMI and diabetes.
It states that there is some relation between the predictor and outcome variables in the population. It is denoted by H1.
Example: There could be an association between a patient’s BMI and diabetes.
Univariate analysis is the simplest and easiest form of data analysis where the data being analyzed contains only one variable.
Example - Studying the heights of players in the NBA.
Univariate analysis can be described using Central Tendency, Dispersion, Quartiles, Bar charts, Histograms, Pie charts, and Frequency distribution tables.
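A brief, illustrative Python sketch of these univariate summaries, using made-up height values:
import pandas as pd

heights = pd.Series([198, 201, 185, 210, 192, 188])    # hypothetical player heights in cm
print(heights.mean(), heights.median())                # central tendency
print(heights.std(), heights.max() - heights.min())    # dispersion (standard deviation and range)
print(heights.quantile([0.25, 0.5, 0.75]))             # quartiles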
Bivariate analysis involves the analysis of two variables to find causes, relationships, and correlations between the variables.
Example – Analyzing the sale of ice creams based on the temperature outside.
Bivariate analysis can be explained using correlation coefficients, linear regression, logistic regression, scatter plots, and box plots.
Multivariate analysis involves the analysis of three or more variables to understand the relationship of each variable with the other variables.
Example – Analyzing revenue based on expenditure.
Multivariate analysis can be performed using Multiple regression, Factor analysis, Classification & regression trees, Cluster analysis, Principal component analysis, Dual-axis charts, etc.
In Excel, you can use the TODAY() and NOW() functions to get the current date and time.
You can use the SUMIFS() function to find the total quantity.
For the Sales Rep column, you need to give the criteria as “A*” - meaning the name should start with the letter “A”. For the Cost each column, the criteria should be “>10” - meaning the cost of each item is greater than 10.
The result is 13.
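The same filtering logic can be sketched in pandas (the DataFrame below is hypothetical and only mirrors the Sales Rep and Cost each criteria described above):
import pandas as pd

sales = pd.DataFrame({
    "sales_rep": ["Alice", "Bob", "Andrew", "Alice"],
    "cost_each": [12.0, 15.0, 9.0, 11.5],
    "quantity": [5, 3, 4, 8],
})
mask = sales["sales_rep"].str.startswith("A") & (sales["cost_each"] > 10)
print(sales.loc[mask, "quantity"].sum())   # total quantity for reps starting with "A" and cost above 10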
The query stated above is incorrect as we cannot use the alias name while filtering data using the WHERE clause. It will throw an error.
Here is the correct SQL query:
The Union operator combines the output of two or more SELECT statements.
Syntax:
SELECT column_name(s) FROM table1
UNION
SELECT column_name(s) FROM table2;
Let’s consider the following example, where there are two tables - Region 1 and Region 2.
To get the unique records, we use Union.
The Intersect operator returns the common records that are the results of 2 or more SELECT statements.
Syntax:
SELECT column_name(s) FROM table1
INTERSECT
SELECT column_name(s) FROM table2;
The Except operator returns the records from the results of the first SELECT statement that are not present in the results of the second SELECT statement.
Syntax:
SELECT column_name(s) FROM table1
EXCEPT
SELECT column_name(s) FROM table2;
Below is the SQL query to return the records from Region 1 that are not present in Region 2.
Fig: Product Price table
select top 4 * from product_price order by mkt_price desc;
Now, select the top one record from the above result, ordered by mkt_price in ascending order.
The complete SQL query is as follows:
select top 1 * from (select top 4 * from product_price order by mkt_price desc) as a order by mkt_price asc;
The output of the query is as follows:
From the above map, it is clear that states like Washington, California, and New York have the highest sales and profits, while Texas, Pennsylvania, and Ohio have good sales but the least profits.
num = np.array([[1,2,3],[4,5,6],[7,8,9]]). Extract the value 8 using 2D indexing.
Since the value 8 is in the third row and second column of the array (index positions 2 and 1 with zero-based indexing), we pass those index positions to the array as num[2, 1].
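A runnable version of this answer:
import numpy as np

num = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(num[2, 1])   # row index 2, column index 1 (zero-based) returns 8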
Since we only want the odd number from 0 to 9, you can perform the modulus operation and check if the remainder is equal to 1.
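A short NumPy sketch of this approach:
import numpy as np

arr = np.arange(10)         # numbers from 0 to 9
print(arr[arr % 2 == 1])    # keep values whose remainder when divided by 2 is 1, i.e. [1 3 5 7 9]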
You can either use the concatenate() or the hstack() function to stack the arrays.
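For example, assuming two hypothetical one-dimensional arrays a and b:
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.concatenate((a, b)))   # [1 2 3 4 5 6]
print(np.hstack((a, b)))        # same result for one-dimensional arrays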
Suppose there is an emp data frame that has information about a few employees. Let’s add an Address column to that data frame.
Declare a list of values that will be converted into an address column.
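A minimal sketch, assuming a hypothetical emp DataFrame:
import pandas as pd

emp = pd.DataFrame({"name": ["Asha", "Ravi", "Meena"], "dept": ["HR", "IT", "Sales"]})  # hypothetical employees
address = ["Delhi", "Mumbai", "Chennai"]   # list of values that will become the new column
emp["Address"] = address                   # add the Address column to the DataFrame
print(emp)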
Now, let’s head to the final section, i.e., the advanced level data analyst interview questions.
Fig: Products table
Fig: Sales order detail table
We can use an inner join to get records from both the tables. We’ll join the tables based on a common key column, i.e., ProductID.
The result of the SQL query is shown below.
A stored procedure is an SQL script that is saved so that it can be reused to run a task several times.
Let’s look at an example to create a stored procedure to find the sum of the first N natural numbers' squares.
Output: Display the sum of the square for the first four natural numbers
Here is the output to print all even numbers between 30 and 45.
| Treemaps | Heatmaps |
| Treemaps are used to display data in nested rectangles. | Heatmaps can visualize measures against dimensions with the help of colors and sizes to differentiate one or more dimensions and up to two measures. |
| You use dimensions to define the structure of the treemap, and measures to define the size or color of the individual rectangles. | The layout is like a text table, with variations in values encoded as colors. |
| Treemaps are a relatively simple data visualization that can provide insight in a visually attractive format. | In a heatmap, you can quickly see a wide array of information. |
To generate random numbers using NumPy, we use the np.random.randint() function.
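For example:
import numpy as np

rand_nums = np.random.randint(1, 100, size=5)   # five random integers drawn from 1 to 99
print(rand_nums)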
To find the unique values and the number of unique elements, use the unique() and nunique() functions.
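A quick sketch with a hypothetical pandas Series:
import pandas as pd

s = pd.Series(["red", "blue", "red", "green", "blue"])
print(s.unique())     # array of the distinct values
print(s.nunique())    # number of distinct values, i.e. 3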
Now, subset the data for Age<35 and Height>6.
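A hedged sketch, assuming a hypothetical DataFrame with Age and Height columns:
import pandas as pd

df = pd.DataFrame({"Age": [28, 41, 33, 25], "Height": [6.2, 5.9, 6.4, 6.1]})   # hypothetical data
subset = df[(df["Age"] < 35) & (df["Height"] > 6)]                             # rows meeting both conditions
print(subset)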
The resulting plot is a sine graph.
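A minimal sketch for producing such a sine graph, assuming NumPy and Matplotlib are available:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)   # evenly spaced points over one full cycle
plt.plot(x, np.sin(x))               # sine value at each point
plt.title("Sine graph")
plt.show()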
So, those were the 50 data analyst interview questions that can help you crack your interview and become a data analyst.
Now that you know the different data analyst interview questions that can be asked in an interview, it will be easier for you to crack your interviews. Here, you looked at various data analyst interview questions based on difficulty levels, tools, and programming languages.
We hope this article on data analyst interview questions is useful to you. Do you have any questions related to this article? If so, please put them in the comments section of the article, and our experts will get back to you right away.
Shruti is an engineer and a technophile. She works on several trending technologies. Her hobbies include reading, dancing and learning new languages. Currently, she is learning the Japanese language.