Gradient Descent for Linear Regression
In this post we’ll explore the use of gradient descent to determine the parameters of a linear regression model.
For simplicity’s sake we’ll use one feature variable.
We’ll start by showing how you might determine the parameters using a grid search, and then show how the same problem is solved with gradient descent.
Let’s start with some imports of packages we’ll be using in this post:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
plt.style.use('ggplot')
We’re going to generate data that follows the linear relationship:
$y = \beta_0 + \beta_1 x$
where:
$ \beta_0 = -50 $
$ \beta_1 = 2 $
Let’s start by creating some mock data and graphing it.
np.random.seed(1)
X = np.random.randint(0, 100, 100)
# Target variable
# y = 2x - 50 + ϵ (noise)
y = 2*X - 50 + np.random.uniform(-15, 15, 100)
plt.scatter(X, y)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Our aim is to minimise the mean squared error (MSE), written here with a factor of ½ that will simplify the gradient later:
$$ MSE = \dfrac{1}{2n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2$$
Where $\hat{y}$ is our prediction of $y$ on the basis of our estimated $\beta_0$ and $\beta_1$ values:
$$ \hat{y} = \hat{\beta_0} + \hat{\beta_1}x $$
and n is our number of observations.
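To make this concrete, here is a minimal sketch (assuming the X and y arrays generated above) that evaluates the MSE at the true parameters, where the only error left is the noise we added:
# MSE at the true parameters (beta_0 = -50, beta_1 = 2);
# any remaining error comes from the uniform noise term
y_hat_true = -50 + 2 * X
mse_true = np.sum((y_hat_true - y) ** 2) / (2 * len(X))
print(mse_true)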
We could just cast a wide net and try lots of values for $\beta_0$ and $\beta_1$ and see which gives the lowest value for the MSE.
Let’s give this a go, trying each integer value between -100 and 100 for $\beta_0$ and $\beta_1$ and graphing our results:
# Try each integer for beta 0 and beta 1 between -100 and 100
beta_0 = beta_1 = np.arange(-100, 101, 1)
# All combinations of beta_0 and beta_1
plt_beta_0, plt_beta_1 = np.meshgrid(beta_0, beta_1)
def calculate_mse(beta_0, beta_1):
    y_hat = beta_0 + beta_1 * X
    error = y_hat - y
    sse = np.sum(error ** 2)
    return (1 / (2 * len(X))) * sse
calculate_mse_v = np.vectorize(calculate_mse)
mse = calculate_mse_v(plt_beta_0, plt_beta_1)
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(plt_beta_0,
plt_beta_1,
mse,
cmap=cm.jet,
linewidth=0,
antialiased=False)
ax.set_xlabel(r'$\hat{\beta_0}$')
ax.set_ylabel(r'$\hat{\beta_1}$')
ax.set_zlabel(r'MSE')
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
As you might expect, varying $\hat{\beta_1}$ has a much more pronounced effect on the MSE than varying $\hat{\beta_0}$.
We’ll now grab the indices of the $\hat{\beta_0}$ and $\hat{\beta_1}$ values for which the MSE is lowest and graph the result:
# Get the indices of the values of beta_0 and beta_1
# for which the MSE is lowest
beta_1_idx, beta_0_idx = np.unravel_index(mse.argmin(), mse.shape)
# Retrieve the values of beta_0 and beta_1 for which
# the MSE is lowest
beta_0_hat = beta_0[beta_0_idx]
beta_1_hat = beta_1[beta_1_idx]
# Print model parameters
print("y = {} + {}x".format(beta_0_hat, beta_1_hat))
# Plot a line for our model
plt.scatter(X, y)
plt.plot(
[min(X), max(X)],
[min(X) * beta_1_hat + beta_0_hat, max(X) * beta_1_hat + beta_0_hat],
color='blue'
)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Our grid search has done well at approximating $\hat{\beta_1}$ and $\hat{\beta_0}$ for our 1-D feature set.
But this is very inefficient, and it quickly becomes slower as you increase the granularity or the size of the search space.
Instead, we’ll use gradient descent. The aim is to step down the slope of the cost function, whose convex bowl shape is visible in the 3D graph above, until we reach the minimum.
Gradient descent is an iterative process:
- Initialise $\beta_0$ and $\beta_1$ with starting values and calculate the MSE
- Calculate the gradient of the MSE with respect to $\beta_0$ and $\beta_1$, so we know which direction reduces the MSE
- Adjust $\beta_0$ and $\beta_1$ by stepping against the gradient
- Use the new parameters to recompute $\hat{y}$ and the MSE
- Repeat steps 2-4
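For step 2, differentiating the MSE with respect to each parameter gives the gradient (the factor of ½ in the cost cancels the 2 from the square):
$$ \dfrac{\partial MSE}{\partial \hat{\beta_0}} = \dfrac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i) \qquad \dfrac{\partial MSE}{\partial \hat{\beta_1}} = \dfrac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i) \, x_i $$
In step 3 we then subtract $\alpha$ times each partial derivative from the current estimate of the corresponding parameter.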
The learning rate $\alpha$ determines how far down the gradient we move at each step. An $\alpha$ that is too small will take longer to converge on the minimum, while an $\alpha$ that’s too large can overshoot and diverge from the minimum of the MSE.
We will use:
- $\alpha$ = 0.0005
- 100,000 iterations
- An initial value of 1 for $\beta_0$
- An initial value of 1 for $\beta_1$
We’ll also record the MSE at each iteration so we can graph our progress afterwards.
# Stack a row of ones onto X so that beta_0 has its
# own 'feature' (the intercept term)
X = np.array([np.ones(len(X)), X])
alpha = 0.0005
iterations = 100000
beta_0_hat = 1
beta_1_hat = 1
mses = []
for i in range(1, iterations+1):
    y_hat = beta_0_hat * X[0] + beta_1_hat * X[1]
    error = y_hat - y
    sse = np.sum(error ** 2)
    mse = (1 / (2 * len(X.T))) * sse
    mses.append(mse)
    gradient = np.dot(X, error) / len(X.T)
    beta_0_hat = beta_0_hat - (gradient[0] * alpha)
    beta_1_hat = beta_1_hat - (gradient[1] * alpha)
    if i % 10000 == 0:
        print("Iteration {}, MSE={}, β0={}, β1={}".format(
            i, round(mse, 3), round(beta_0_hat, 3), round(beta_1_hat, 3)))
So our final $\hat{\beta_0}$ value is -49.919 and our final $\hat{\beta_1}$ value is 1.968.
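Since ordinary least squares also has a closed-form solution, we can cross-check these estimates; a minimal sketch, assuming the two-row X array (ones and feature values) built above:
# Closed-form least-squares fit as a cross-check on gradient descent
beta_hat, _, _, _ = np.linalg.lstsq(X.T, y, rcond=None)
print("Closed form: y = {:.3f} + {:.3f}x".format(beta_hat[0], beta_hat[1]))
The two estimates should agree closely once gradient descent has converged.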
Let’s take a look at how the fitted line looks when plotted:
plt.scatter(X[1], y)
plt.plot(
[min(X[1]), max(X[1])],
[min(X[1]) * beta_1_hat + beta_0_hat, max(X[1]) * beta_1_hat + beta_0_hat],
color='blue'
)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Since we collected the MSE at each iteration of gradient descent, we can graph these values and see how the error decays over the course of training:
plt.figure(figsize=(10,5))
plt.plot(np.arange(1, iterations+1), mses, color='green')
plt.ylabel("MSE")
plt.xlabel("Iteration")
plt.show()
As we can see, the majority of the decay occurs in the first 20,000 iterations. It would be interesting to see how this graph changes for different values of $\alpha$; I leave that as a task for the reader.
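As a possible starting point, here is a minimal sketch (the particular $\alpha$ values are only illustrative) that repeats the descent for a few learning rates and overlays the MSE curves:
# Re-run gradient descent for several learning rates and
# compare how quickly the MSE decays (alpha values are illustrative)
def gradient_descent(X, y, alpha, iterations=100000):
    betas = np.array([1.0, 1.0])
    history = []
    for _ in range(iterations):
        error = betas @ X - y
        history.append(np.sum(error ** 2) / (2 * len(y)))
        betas -= alpha * (X @ error) / len(y)
    return betas, history

plt.figure(figsize=(10, 5))
for alpha in (0.0001, 0.0003, 0.0005):
    _, history = gradient_descent(X, y, alpha)
    plt.plot(history, label=r"$\alpha = {}$".format(alpha))
plt.xlabel("Iteration")
plt.ylabel("MSE")
plt.legend()
plt.show()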