In this lab, you'll perform a full linear regression analysis on the Lego dataset. You'll implement the process demonstrated in the previous lesson, taking a stepwise approach to analyze and improve the model along the way.
You will be able to:
- Perform a full linear regression with iterations based on p-value of features and other parameters
- Create visualizations to better understand the distributions of variables in a dataset
- Determine whether or not the assumptions for linear regression hold true for this example
To start, load the data and create an initial regression model of list_price using all of your available features.
Note: In order to write the model formula you'll have to do some tedious manipulation of your column names. Statsmodels will not allow you to have spaces, apostrophes, or arithmetic symbols (such as +) in your column names. Preview them and refine them as you go.
If you receive an error such as "PatsyError: error tokenizing input (maybe an unclosed string?)", then you need to further preprocess your column names.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_style('darkgrid')
from statsmodels.formula.api import ols
from statsmodels.stats.outliers_influence import variance_inflation_factor
import statsmodels.api as sm
import scipy.stats as stats
import warnings
warnings.simplefilter('ignore', FutureWarning)
warnings.simplefilter('ignore', RuntimeWarning)
warnings.simplefilter('ignore', UserWarning)
# Import the dataset 'Lego_dataset_cleaned.csv'
df = None
# Your code here - Manipulate column names
# Your code here - Define the target and predictors
# Your code here - Fit the actual model
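As a hedged sketch of these three steps (clean the column names, define the formula, fit the model), here is a minimal version. The DataFrame below is a small synthetic stand-in for the Lego data, so the column names and values are hypothetical; in the lab you would load `Lego_dataset_cleaned.csv` instead.

```python
import pandas as pd
from statsmodels.formula.api import ols

# Synthetic stand-in for the Lego data; in the lab you would instead use
# df = pd.read_csv('Lego_dataset_cleaned.csv')
df = pd.DataFrame({
    'list_price': [12, 24, 41, 54, 72, 83, 31, 59],
    'piece_count': [100, 250, 400, 550, 700, 850, 300, 600],
    "ages_5+": [1, 0, 1, 0, 1, 0, 1, 0],
})

# Strip characters that patsy cannot tokenize (spaces, apostrophes, '+', etc.)
df.columns = [col.replace(' ', '_').replace("'", '').replace('+', 'plus')
              for col in df.columns]

# Build a formula from every remaining predictor and fit the initial model
predictors = [col for col in df.columns if col != 'list_price']
formula = 'list_price ~ ' + ' + '.join(predictors)
model = ols(formula=formula, data=df).fit()
print(model.summary())
```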
Based on the initial model, remove the features that do not appear to be statistically significant and rerun the model.
# Your code here - Remove features which do not appear to be statistically relevant
# Your code here - Refit the model
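One possible sketch of p-value based elimination, again on synthetic stand-in data (here `noise_col` is a deliberately uninformative hypothetical feature); the 0.05 threshold is a conventional choice, not a rule:

```python
import pandas as pd
from statsmodels.formula.api import ols

# Synthetic stand-in: piece_count drives the price, noise_col does not
df = pd.DataFrame({
    'list_price': [12, 24, 41, 54, 72, 83, 31, 59],
    'piece_count': [100, 250, 400, 550, 700, 850, 300, 600],
    'noise_col': [3, 1, 4, 1, 5, 9, 2, 6],
})
model = ols('list_price ~ piece_count + noise_col', data=df).fit()

# Keep only predictors with p-values below 0.05 (the intercept always stays)
pvals = model.pvalues.drop('Intercept')
kept = pvals[pvals < 0.05].index
refit = ols('list_price ~ ' + ' + '.join(kept), data=df).fit()
print(refit.summary())
```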
Comment: You should see that the model performance is identical. Additionally, observe that there are further features that now appear statistically insignificant. Continue to refine the model accordingly.
# Your code here - Continue to refine the model
# Your code here - Refit the model
There are still a lot of features in the current model! Chances are there are some strong multicollinearity issues. Begin to investigate the extent of this problem.
# Your code here - Code a way to identify multicollinearity
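One way to quantify multicollinearity is the variance inflation factor (VIF), whose import already appears at the top of this lab; a correlation heatmap is another common option. A sketch on hypothetical predictors, two of which are nearly collinear by construction:

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictors; num_reviews is roughly piece_count / 10,
# so the two are nearly collinear
X = pd.DataFrame({
    'piece_count': [100, 250, 400, 550, 700, 850],
    'num_reviews': [11, 24, 41, 56, 69, 86],
    'play_star_rating': [4.0, 3.5, 4.5, 4.2, 3.8, 4.1],
})

# VIF for each column: how well it is predicted by the other columns
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)  # a common rule of thumb flags VIF above 5 (or 10)
```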
Once again, subset your features based on your findings above. Then rerun the model once again.
# Your code here - Subset features based on multicollinearity
# Your code here - Refit model with subset features
Check whether the normality assumption holds for your model.
# Your code here - Check that the residuals are normally distributed
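A common check (not the only one) is a Q-Q plot of the residuals. The sketch below uses synthetic residuals drawn from a normal distribution, standing in for `model.resid`, so the plot illustrates what a "good" result looks like:

```python
import numpy as np
import scipy.stats as stats
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Synthetic residuals standing in for model.resid
rng = np.random.default_rng(42)
resid = rng.normal(loc=0, scale=1, size=200)

# Q-Q plot: points hugging the reference line support normality
fig, ax = plt.subplots()
(osm, osr), (slope, intercept, r) = stats.probplot(resid, dist='norm', plot=ax)
ax.set_title('Q-Q plot of residuals')

print(f'Q-Q correlation: {r:.3f}')  # values near 1 suggest normal residuals
```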
Check whether the model's errors are indeed homoscedastic or if they violate this principle and display heteroscedasticity.
# Your code here - Check that the residuals are homoscedastic
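The standard visual check is a scatter plot of residuals against fitted values. The sketch below fabricates fitted values and residuals (stand-ins for `model.fittedvalues` and `model.resid`) whose spread grows with the fitted value, so it reproduces the funnel shape discussed next:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Synthetic stand-ins for model.fittedvalues and model.resid;
# the residual spread is made to grow with the fitted value
rng = np.random.default_rng(0)
fitted = np.linspace(10, 100, 200)
resid = rng.normal(0, 0.1 * fitted)

fig, ax = plt.subplots()
ax.scatter(fitted, resid, alpha=0.5)
ax.axhline(0, color='red')
ax.set(xlabel='Fitted values', ylabel='Residuals',
       title='Residuals vs. fitted values')
# A widening spread from left to right (a funnel) signals heteroscedasticity
```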
Comment: This displays a fairly pronounced 'funnel' shape: errors appear to increase as the list_price increases. This doesn't bode well for our model. Subsetting the data to remove outliers and confining the model to this restricted domain may be necessary. A log transformation or something equivalent may also be appropriate.
From here, make additional refinements to your model based on the above analysis. As you progress, continue to go back and check the assumptions for the updated model. Be sure to attempt at least two additional model refinements.
Comment: Based on the above plots, it seems as though outliers are having a substantial impact on the model. As such, removing outliers may be appropriate. Investigating the impact of a log transformation is also worthwhile.
# Your code here - Check for outliers
# Your code here - Remove extreme outliers
# Your code here - Rerun the model
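The outlier-removal and log-transformation ideas above might be sketched like this, on synthetic data with a heavy-tailed target; the 99th-percentile cutoff and the log-log formula are illustrative choices, not the only valid ones:

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

# Synthetic stand-in with a heavy right tail in the target
rng = np.random.default_rng(1)
piece_count = rng.integers(50, 2000, size=300)
list_price = 0.1 * piece_count * rng.lognormal(0, 0.2, size=300)
df = pd.DataFrame({'list_price': list_price, 'piece_count': piece_count})

# Drop extreme rows, e.g. everything above the 99th percentile of the target
cutoff = df['list_price'].quantile(0.99)
subset = df[df['list_price'] <= cutoff].copy()

# A log transformation can tame a funnel-shaped residual plot
subset['log_price'] = np.log(subset['list_price'])
model = ols('log_price ~ np.log(piece_count)', data=subset).fit()
print(model.summary())
```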
# Your code here - Check normality assumption
# Your code here - Check the Homoscedasticity Assumption
# Your code here
Well done! As you can see, regression can be a challenging task that requires you to make decisions along the way, try alternative approaches, and make ongoing refinements. These choices depend on the context and specific use cases.