In the previous lesson, we learned the mathematical definition of a gradient. We saw that the gradient of a function is the combination of its partial derivatives with respect to each of that function's variables, and that the direction of gradient descent is simply the negative direction of the gradient. For example, if the direction of steepest ascent of a function is up and to the right, the direction of descent is down and to the left. In this lesson, we will apply gradient descent to our cost function to see how we can move towards a best fit regression line by changing the variables of our regression line, the slope $m$ and the y-intercept $b$.
You will be able to:
- Represent RSS as a multivariable function and take partial derivatives to perform gradient descent
Think about why gradient descent applies so well to a cost function. Initially, we said that the cost of our regression line, meaning the difference between what the line predicts and the actual data, changes as we alter the y-intercept or the slope of the line.
Remember that mathematically, when we say cost function, we use the residual sum of squares (RSS), where
$$
\begin{align}
RSS &= \sum_{i=1}^n(\text{actual} - \text{expected})^2 \\
&= \sum_{i=1}^n(y_i - \hat{y}_i)^2 \\
&= \sum_{i=1}^n(y_i - (mx_i + b))^2
\end{align}
$$
for all $n$ of our data points.
So RSS takes the difference between the actual value $y_i$ and the expected value $\hat{y}_i = mx_i + b$ for each data point, squares that difference, and sums the squares across the dataset.
import numpy as np
import matplotlib.pyplot as plt

# an initial guess at the regression line
def regression_formula(x):
    return 1.8 + 46*x

# generate a dataset that roughly follows y = 3 + 50x
np.random.seed(225)
x = np.random.rand(30, 1).reshape(30)
y_randterm = np.random.normal(0, 3, 30)
y = 3 + 50*x + y_randterm

# plot the data along with the initial regression line
plt.figure(figsize=(7,5))
plt.plot(x, y, '.b')
plt.plot(x, regression_formula(x), '-', color="red")
plt.xlabel("x", fontsize=14)
plt.ylabel("y", fontsize=14);
And when we plotted how RSS changes as we alter just one variable of our regression line, the slope $m$, we got a cost curve like the one produced below.
# difference between each actual value and the line's prediction
def errors(x_values, y_values, m, b):
    y_line = (b + m*x_values)
    return (y_values - y_line)

def squared_errors(x_values, y_values, m, b):
    return np.round(errors(x_values, y_values, m, b)**2, 2)

def residual_sum_squares(x_values, y_values, m, b):
    return round(sum(squared_errors(x_values, y_values, m, b)), 2)

# calculate RSS for slope values from 40 to 59, holding b fixed at 1.319
table = np.zeros((20, 2))
for idx, val in enumerate(range(40, 60)):
    table[idx, 0] = val
    table[idx, 1] = residual_sum_squares(x, y, val, 1.319)

# plot RSS as a function of the slope
plt.figure(figsize=(10,7))
plt.plot(table[:,0], table[:,1], '-')
plt.xlabel("m-values", fontsize=14)
plt.ylabel("RSS", fontsize=14)
plt.title("RSS with changes to slope", fontsize=16);
In two dimensions, we decrease our RSS simply by moving forwards or backwards along the cost curve, which is the equivalent of changing one variable, in this case the slope $m$. The cost curve above indicates that moving the slope of our regression line towards roughly 50 decreases the RSS.
Allowing ourselves to change both variables, the slope and the y-intercept, means we need to consider how RSS changes with respect to both.
Because the RSS is a function of how we change our values of $m$ and $b$, we can write our cost function as $J(m, b)$:
$$
\begin{align}
J(m, b) &= \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 \\
J(m, b) &= \sum_{i=1}^{n}(y_i - (mx_i + b))^2 && \text{substituting } mx_i + b \text{ for } \hat{y}_i
\end{align}
$$
In the function above, $J(m, b)$ is our cost, the RSS, expressed as a function of the slope $m$ and the y-intercept $b$ of our regression line.
Just like the other multivariable functions we have seen thus far, we can display it in three dimensions, and it looks like the following.
The three-dimensional graph above shows how the cost associated with our regression line changes as the slope and y-intercept values are changed.
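The figure itself isn't reproduced here, but as a rough sketch, a surface like it could be generated by evaluating the residual_sum_squares function defined above over a grid of candidate $m$ and $b$ values (the grid ranges chosen below are illustrative):

from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib versions

# grid of candidate slope (m) and y-intercept (b) values
m_values = np.linspace(40, 60, 40)
b_values = np.linspace(-5, 10, 40)
M, B = np.meshgrid(m_values, b_values)

# evaluate RSS at every (m, b) pair on the grid
rss_surface = np.zeros(M.shape)
for i in range(M.shape[0]):
    for j in range(M.shape[1]):
        rss_surface[i, j] = residual_sum_squares(x, y, M[i, j], B[i, j])

# draw the cost surface
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(M, B, rss_surface, cmap='viridis')
ax.set_xlabel("m", fontsize=14)
ax.set_ylabel("b", fontsize=14)
ax.set_zlabel("RSS", fontsize=14);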
Let's explore using gradient descent to determine how to change our regression line when we can alter both $m$ and $b$.
Now that we are applying gradient descent to our cost curve $J(m, b)$, the technique should answer how much to move the $m$ variable and the $b$ variable to produce the greatest decrease in cost, or RSS. In other words, when altering our regression line, we want to know how much of this change should be derived from a move in the slope versus how much should be derived from a change in the y-intercept.
As we know, the gradient of a function is simply the partial derivatives with respect to each of the variables, so:
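For our cost function with variables $m$ and $b$, that means:

$$ \nabla J(m, b) = \left( \frac{\partial J}{\partial m}, \frac{\partial J}{\partial b} \right) $$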
In calculating the gradient of our cost function $J(m, b)$, we need its partial derivative with respect to $m$ and its partial derivative with respect to $b$.
Ok, so let's take the partial derivatives of the following:

$$ J(m, b) = \sum_{i=1}^{n}(y_i - (mx_i + b))^2 $$
You'll notice that taking the partial derivatives of this function is not that simple. You've learned a set of rules about derivatives in this section, but to take these two partial derivatives successfully, you'll need to use the chain rule. The chain rule is a formula for computing the derivative of a composite function.
The chain rule tells you how to compute the derivative of nearly any composite function, that is, a "molecular" function built up from simpler "atomic" functions whose derivatives you already know.
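In symbols, for differentiable functions $f$ and $g$, the chain rule states:

$$ \frac{d}{dx}f(g(x)) = f'(g(x)) \cdot g'(x) $$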
In this lesson, we don't want to get lost in the details of the chain rule and how to arrive at the partial derivatives needed here, but in the appendix you can find:
- a lesson on the chain rule in general
- a lesson that walks you through the details of finding the partial derivatives used in this lesson
The resulting partial derivatives are shown below.
Ok, so now we have our two partial derivatives of $J(m, b)$, written here for a single data point $(x, y)$:
$$ \frac{\partial J}{\partial m} = -2x(y - (mx + b)) $$

$$ \frac{\partial J}{\partial b} = -2(y - (mx + b)) $$
And as $mx + b = \hat{y}$, we can rewrite these as:
$$ \frac{\partial J}{\partial m} = -2x(y - \hat{y}) = -2x\epsilon $$

$$ \frac{\partial J}{\partial b} = -2(y - \hat{y}) = -2\epsilon $$
Remember, the error $\epsilon$ = actual − expected, so we can replace $y - \hat{y}$ with $\epsilon$. Summing over all $n$ data points, the partial derivatives of RSS become:
$$ \frac{\partial J}{\partial m} = -2\sum_{i=1}^n x_i(y_i - \hat{y}_i) = -2\sum_{i=1}^n x_i\epsilon_i $$

$$ \frac{\partial J}{\partial b} = -2\sum_{i=1}^n(y_i - \hat{y}_i) = -2\sum_{i=1}^n \epsilon_i $$
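To make these formulas concrete, here is a minimal sketch of how the two sums could be computed with numpy, reusing the x and y arrays defined above (the function name partial_derivatives is just an illustrative choice):

def partial_derivatives(x_values, y_values, m, b):
    # error for each point: actual minus expected
    epsilon = y_values - (m*x_values + b)
    # partial derivative of RSS with respect to the slope m
    dj_dm = -2*np.sum(x_values*epsilon)
    # partial derivative of RSS with respect to the y-intercept b
    dj_db = -2*np.sum(epsilon)
    return dj_dm, dj_db

partial_derivatives(x, y, 46, 1.8)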
So that is what we'll do to find the "best fit regression line." We'll start with an initial regression line with guesses for the values of $m$ and $b$, and then use these partial derivatives to repeatedly update them.
In the context of gradient descent, we use these partial derivatives to take a step. Remember that our step should be in the opposite direction of our partial derivatives, as we are descending towards the minimum. So to take a step of gradient descent, we use the general formula:
current_m = old_m $- \frac{\partial J}{\partial m}$

current_b = old_b $- \frac{\partial J}{\partial b}$
or, substituting in the partial derivatives we just calculated:
current_m = old_m $- \left(-2\sum_{i=1}^n x_i\epsilon_i \right)$

current_b = old_b $- \left(-2\sum_{i=1}^n \epsilon_i \right)$
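As a preview, here is a minimal sketch of what a single update step could look like in code; the step_gradient name and the learning_rate scaling factor are illustrative additions (a small learning rate is typically used in practice so the raw steps above don't overshoot the minimum):

def step_gradient(x_values, y_values, old_m, old_b, learning_rate=0.0001):
    # error for each point: actual minus expected
    epsilon = y_values - (old_m*x_values + old_b)
    # partial derivatives of RSS with respect to m and b
    dj_dm = -2*np.sum(x_values*epsilon)
    dj_db = -2*np.sum(epsilon)
    # move opposite the gradient, scaled by the learning rate
    current_m = old_m - learning_rate*dj_dm
    current_b = old_b - learning_rate*dj_db
    return current_m, current_b

current_m, current_b = step_gradient(x, y, 46, 1.8)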
In the next lesson, we'll work through translating this technique more fully into code, using our partial derivative formulas to iteratively update a regression line like the one below.
import numpy as np
import matplotlib.pyplot as plt

# an updated regression line, closer to the data than the initial guess
def regression_formula(x):
    return 1.319 + 52*x

# the same dataset as before
np.random.seed(225)
x = np.random.rand(30, 1).reshape(30)
y_randterm = np.random.normal(0, 3, 30)
y = 3 + 50*x + y_randterm

# plot the data along with the updated regression line
plt.figure(figsize=(7,5))
plt.plot(x, y, '.b')
plt.plot(x, regression_formula(x), '-', color="red")
plt.xlabel("x", fontsize=14)
plt.ylabel("y", fontsize=14);
In this section, we developed some intuition for why the gradient of a function points in the direction of steepest ascent and the negative gradient points in the direction of steepest descent. Essentially, the gradient uses the partial derivatives to see how the function changes along each of its dimensions, and then moves in the combined direction, weighted more heavily towards the partial derivative with the larger magnitude.
We also practiced calculating some gradients, and ultimately calculated the gradient for our cost function. This gave us two formulas that tell us how to update our regression line so that it descends along our cost function and approaches a "best fit line".