Before getting into Bayesian Linear Regression, let us understand what Linear Regression is.

Linear regression fits a straight-line equation to observed data in order to model the relationship between two variables. One variable is treated as explanatory (independent), while the other is treated as dependent. For instance, a modeler might use a linear regression model to relate people's weights to their heights.

Now that we know what Linear Regression is, we will learn about Bayesian Linear Regression, its real-life application, its advantages and disadvantages, and implement it using Python.


What Is Bayesian Linear Regression?

In Bayesian linear regression, the mean of one variable is described as a weighted sum of other variables. This type of conditional modeling aims to determine the prior distribution of the regression coefficients and other parameters describing the distribution of the regressand, and it ultimately permits out-of-sample forecasting of the regressand conditional on observed values of the regressors.

The normal linear model, in which the distribution of y given X is Gaussian, is the most basic and popular variant of this model. For this model, the posterior can be determined analytically for a specific set of prior probabilities known as conjugate priors; posteriors for more arbitrarily chosen priors usually have to be approximated.

Bayesian regression can be quite helpful when the dataset has too few or poorly dispersed data points. In contrast to conventional regression techniques, where the output is derived from a single estimated value for each attribute, a Bayesian regression model's output is derived from a probability distribution.

The result, "y," is produced by a normal distribution (where the variance and mean are normalized). The goal of the Bayesian Regression Model is to identify the 'posterior' distribution again for model parameters rather than the model parameters themselves. The model parameters will be expected to follow a distribution in addition to the output y. 

The posterior expression is given below:

Posterior = (Likelihood * Prior)/Normalization

The expression parameters are explained below:

  • Posterior: The probability of a hypothesis H given observed evidence E, i.e., P(H | E).
  • Likelihood: The probability of observing the evidence E given the hypothesis H, i.e., P(E | H).
  • Prior: The probability of the hypothesis H before the evidence E is observed, i.e., P(H).
  • Normalization: The marginal probability of the evidence E, i.e., P(E), which scales the posterior so that it integrates to one.

This is the same as Bayes' Theorem, which states the following -

P(A|B) = (P(B|A) P(A))/P(B)

P(A) is the probability that event A will occur, while P(A|B) is the probability that event A will occur given that event B has already occurred. Here, A and B are events. P(B), the probability of event B occurring, cannot be zero, since B is known to have already occurred.
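As a quick sanity check, the theorem can be applied to made-up numbers; every probability below is purely illustrative:

# Bayes' theorem on illustrative numbers
p_a = 0.3          # P(A): prior probability of event A
p_b_given_a = 0.8  # P(B|A): probability of B given that A occurred
p_b = 0.5          # P(B): probability of B (must be non-zero)

p_a_given_b = (p_b_given_a * p_a) / p_b
print(p_a_given_b)  # 0.48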

According to the formula above, we obtain a posterior distribution for the model parameters that is proportional to the likelihood of the data multiplied by the prior probability of the parameters. This is unlike Ordinary Least Squares (OLS), which yields only a single point estimate of the parameters.

As more data points are collected, the likelihood comes to dominate and eventually outweighs the prior. In the limit of an unlimited number of data points, the parameter values converge to those obtained by OLS. Consequently, we start our regression method with an initial estimate (the prior).

As we include additional data points, the accuracy of our model improves. Therefore, making a Bayesian ridge regression model accurate requires a considerable amount of training data.
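To see this convergence concretely, the sketch below compares the slope learned by scikit-learn's BayesianRidge with the slope learned by OLS as the sample size grows; the synthetic data-generating process (true slope of 2, Gaussian noise) is invented purely for illustration:

import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic data: y = 2x + Gaussian noise
    X = rng.uniform(-1, 1, size=(n, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)
    return X, y

for n in (10, 100, 10_000):
    X, y = make_data(n)
    bayes = BayesianRidge().fit(X, y)
    ols = LinearRegression().fit(X, y)
    # The two slopes agree more and more closely as n grows
    print(n, bayes.coef_[0], ols.coef_[0])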

Let's quickly review the mathematical side of the situation now. If 'y' is the expected value in a linear model, then

y(w,x) = w0+w1x1+...+wpxp

where w0 is the intercept, the vector 'w' holds the weights (coefficients) w1, ..., wp, and 'x' is the vector of input features x1, ..., xp:

w=(w1…wp)
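As a minimal illustration, this weighted sum can be computed directly with NumPy; the weights and inputs below are arbitrary:

import numpy as np

w0 = 1.0                        # intercept w0
w = np.array([0.5, -2.0, 3.0])  # weights w1..wp (p = 3, chosen arbitrarily)
x = np.array([2.0, 1.0, 0.5])   # one input vector x1..xp

y = w0 + w @ x  # y(w, x) = w0 + w1*x1 + ... + wp*xp
print(y)        # 1.0 + (1.0 - 2.0 + 1.5) = 1.5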

For Bayesian regression to produce a fully probabilistic model, the output "y" is assumed to be Gaussian distributed around Xw, as demonstrated below:

p(y|X, w, 𝛼) = N(y|Xw, 𝛼)

where alpha is a hyper-parameter with a Gamma distribution prior. It is treated as a random variable that is estimated from the data. The Bayesian ridge regression implementation is provided below.

The prior over the weights 'w' on which Bayesian ridge regression is based is as follows:

p(w|λ) = N(w|0, λ^-1 Ip)

where lambda, the precision of the weights, is itself given a Gamma prior, just as alpha is. The hyperparameters alpha_1 and alpha_2 are the shape and inverse scale (rate) parameters of the Gamma prior over alpha, while lambda_1 and lambda_2 play the same roles for the Gamma prior over lambda.
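To make this hierarchy concrete, the sketch below first draws the precision lambda from its Gamma hyperprior and then draws the weights from N(0, λ^-1 Ip). The shape and rate values are chosen for readability; scikit-learn's defaults of 1e-6 are nearly uninformative:

import numpy as np
from scipy import stats

p = 3        # number of weights, chosen arbitrarily
shape = 2.0  # Gamma shape for lambda (scikit-learn's default is 1e-6)
rate = 1.0   # Gamma inverse scale (rate) for lambda (default is 1e-6)

# Draw the weight precision lambda from its Gamma hyperprior ...
lam = stats.gamma(a=shape, scale=1.0 / rate).rvs(random_state=0)

# ... then draw the weights from the spherical Gaussian prior N(0, lambda^-1 * Ip)
w = stats.multivariate_normal(mean=np.zeros(p), cov=np.eye(p) / lam).rvs(random_state=0)
print(lam, w)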

We have discussed Bayesian linear regression; let us now discuss some of its real-life applications.

Real-life Application Of Bayesian Linear Regression

Some of the real-life applications of Bayesian Linear Regression are given below:

  • Using Priors: Consider a scenario in which your supermarket carries a new product and you want to predict its first Christmas sales. For the new product's Christmas effect, you can simply use the average effect of comparable items as a prior.

Additionally, once we obtain data from the new item's first Christmas sales, the prior is immediately updated. As a result, the forecast for the next Christmas is influenced by both the prior and the new item's data.

  • Regularizing Priors: With the season, day of the week, trend, holidays, and a ton of promotion indicators, our model is severely over-parameterized. Therefore, regularization is crucial to keep the forecasts in check.

Now that we have an idea of the real-life applications of Bayesian linear regression, let us learn about its advantages and disadvantages.


Advantages Of Bayesian Regression

Some of the main advantages of Bayesian Regression are defined below:

  • It is extremely efficient when the dataset is tiny.
  • It is particularly well-suited for online learning, where data arrives in a stream, as opposed to batch learning, where the complete dataset is available before training begins. Because each posterior serves as the prior for the next observation, Bayesian regression can be used without having to store the data (see the sketch after this list).
  • The Bayesian approach is mathematically robust and has been applied successfully. Using it requires no additional prior knowledge of the dataset.
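Here is a minimal sketch of that online behavior: a conjugate Gaussian update applied one observation at a time, assuming the noise precision is known. The data-generating weights and all numeric settings are invented for illustration, and this is not scikit-learn's internal implementation:

import numpy as np

# Online Bayesian linear regression with known noise precision beta and a
# Gaussian prior N(m, S) over the weights [w0, w1]. The posterior after each
# observation becomes the prior for the next, so no data needs to be stored.
beta = 25.0           # assumed noise precision (1 / sigma^2 with sigma = 0.2)
m = np.zeros(2)       # prior mean
S = 10.0 * np.eye(2)  # broad prior covariance

rng = np.random.default_rng(1)
for _ in range(200):
    x1 = rng.uniform(-1, 1)
    phi = np.array([1.0, x1])                    # design vector [1, x]
    y = -0.3 + 0.5 * x1 + rng.normal(scale=0.2)  # true weights: -0.3 and 0.5

    # Conjugate update: S_new^-1 = S^-1 + beta * phi phi^T,
    #                   m_new    = S_new (S^-1 m + beta * y * phi)
    S_inv = np.linalg.inv(S)
    S = np.linalg.inv(S_inv + beta * np.outer(phi, phi))
    m = S @ (S_inv @ m + beta * y * phi)

print(m)  # approaches the true weights [-0.3, 0.5]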

Let us now look at some disadvantages of Bayesian Regression.

Disadvantages Of Bayesian Regression

Some common disadvantages of using Bayesian Regression:

  • The model's inference process can take some time.
  • The Bayesian strategy is not worthwhile when a large amount of data is available, since the standard frequentist approach does the job more efficiently.

After going through the definitions, applications, and advantages and disadvantages of Bayesian Linear Regression, it is time for us to explore how to implement Bayesian Regression using Python.

Implementation Of Bayesian Regression Using Python

We shall apply Bayesian ridge regression in this example. The Bayesian approach, however, can be paired with any regression technique, such as ordinary linear regression, lasso regression, etc. To implement Bayesian ridge regression, we'll use the scikit-learn library.

We'll make use of the Boston Housing dataset, which includes details on the median price of homes in various Boston neighborhoods. (Note that load_boston was removed in scikit-learn 1.2, so running this example requires an earlier version.)

The r2 score will be used for evaluation. The best possible r2 score is 1.0. A constant model that always predicts the mean of the targets, regardless of the input features, gets an r2 score of 0.0, and even worse models can have negative r2 scores.
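As a short illustration of these regimes, using arbitrary target values:

from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 7.0, 9.0]  # arbitrary targets, mean 6.0

print(r2_score(y_true, [3.0, 5.0, 7.0, 9.0]))  # 1.0  (perfect predictions)
print(r2_score(y_true, [6.0, 6.0, 6.0, 6.0]))  # 0.0  (always predicts the mean)
print(r2_score(y_true, [9.0, 7.0, 5.0, 3.0]))  # -3.0 (worse than the mean)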

However, before we begin the coding, you must understand the crucial parameters of a Bayesian ridge regression model (a constructor sketch with these values follows the list):

  • n_iter: The maximum number of iterations. The default value is 300.
  • tol: The convergence threshold that decides when to stop the procedure. 1e-3 is the default value.
  • alpha_1: Shape parameter of the Gamma prior over the alpha parameter (the noise precision). 1e-6 is the default value.
  • alpha_2: Inverse scale (rate) parameter of the Gamma prior over the alpha parameter. 1e-6 is the default value.
  • lambda_1: Shape parameter of the Gamma prior over the lambda parameter (the weight precision). 1e-6 is the default value.
  • lambda_2: Inverse scale (rate) parameter of the Gamma prior over the lambda parameter. 1e-6 is the default value.
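As a sketch, here is how these hyperparameters are passed to BayesianRidge; the values written out below are simply the library defaults (recent scikit-learn releases rename the n_iter argument to max_iter):

from sklearn.linear_model import BayesianRidge

# All values below are the scikit-learn defaults, spelled out for clarity
model = BayesianRidge(
    n_iter=300,     # maximum number of iterations
    tol=1e-3,       # convergence threshold
    alpha_1=1e-6,   # shape of the Gamma prior over alpha
    alpha_2=1e-6,   # inverse scale (rate) of the Gamma prior over alpha
    lambda_1=1e-6,  # shape of the Gamma prior over lambda
    lambda_2=1e-6,  # inverse scale (rate) of the Gamma prior over lambda
)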

Let us now implement this using Python3.

Program

from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.linear_model import BayesianRidge

# Loading the dataset (load_boston requires scikit-learn < 1.2)
dataset = load_boston()
X, y = dataset.data, dataset.target

# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=42)

# Creating and training the model
model = BayesianRidge()
model.fit(X_train, y_train)

# Predicting on the test data
prediction = model.predict(X_test)

# Evaluating the model's r2 score on the test set
print(f"Test Set r2 score : {r2_score(y_test, prediction)}")

Output

Test Set r2 score : 0.7943355984883815
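Because the fitted model is probabilistic, we can ask for more than point predictions. Continuing from the program above, the sketch below requests the standard deviation of the predictive distribution for each test point (predict with return_std=True) and inspects the learned precisions:

# Predictive means and standard deviations for the test points
mean, std = model.predict(X_test, return_std=True)
print(mean[:3])  # predictive means for the first three test points
print(std[:3])   # predictive uncertainties for the same points

# The learned noise precision (alpha_) and weight precision (lambda_)
print(model.alpha_, model.lambda_)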

Learn More About Bayesian Linear Regression With Simplilearn

In this article, we discussed Bayesian linear regression, explored its real-life applications, and weighed its advantages and disadvantages. We also learned how to implement the Bayesian linear regression model and its parameters using Python3.

To learn about such concepts in a comprehensive manner to upskill yourself and grow your career, check out Simplilearn’s Caltech Data Science Bootcamp today!


FAQs

1. What does Bayesian regression do?

The goal of Bayesian linear regression is to ascertain the posterior distribution of the model parameters rather than to identify the one "best" value of the model parameters.

2. What are some advantages of using Bayesian linear regression?

The main benefit is that, unlike traditional regression, which yields only a point estimate and a confidence interval, Bayesian processing gives you the full posterior distribution and, with it, the complete range of inferential solutions.

3. How do you do Bayesian regression?

We follow three stages to perform Bayesian linear regression: first, we set up a probabilistic model that describes our assumptions about how the data and parameters are generated; second, we perform inference for the parameters by computing their posterior distribution; third, we use that posterior to make predictions for new data via the posterior predictive distribution.

4. Is Bayesian linear regression Parametric?

Yes. Bayesian linear regression is parametric: it assumes a fixed functional form (a weighted sum of the inputs) with a finite number of parameters, and it places prior distributions, such as Gaussian priors, on those parameters.

5. What are the assumptions of Bayesian regression?

The following are the assumptions we make in our defined probabilistic model:

  • The model is linear in its parameters
  • The observations are independent and identically distributed (i.i.d.)
  • The variance σ² is the same for every observation, resulting in homoscedasticity
  • The likelihood (or the noise, in the first formulation) follows the normal distribution, so we should not expect to see heavy tails, among other things

6. What are the disadvantages of Bayesian regression?

The model's inference process can take some time. The Bayesian strategy is not worthwhile when a large amount of data is available, since the standard frequentist approach does the job more efficiently.

Data Science & Business Analytics Courses Duration and Fees

Data Science & Business Analytics programs typically range from a few weeks to several months, with fees varying based on program and institution.

Program NameDurationFees
Post Graduate Program in Data Analytics

Cohort Starts: 6 May, 2024

8 Months$ 3,749
Post Graduate Program in Data Science

Cohort Starts: 28 May, 2024

11 Months$ 4,199
Caltech Post Graduate Program in Data Science

Cohort Starts: 29 May, 2024

11 Months$ 4,500
Applied AI & Data Science

Cohort Starts: 18 Jun, 2024

3 Months$ 2,624
Data Analytics Bootcamp

Cohort Starts: 24 Jun, 2024

6 Months$ 8,500
Data Scientist11 Months$ 1,449
Data Analyst11 Months$ 1,449