Coding Your Own Linear Regression Model

One task that you will almost certainly be required to do in other data science courses (especially if you are a MIDS student) is to write some of your statistical / machine learning models from scratch. This can be a very valuable exercise, as it ensures that you understand what is actually going on behind the scenes of the models you use every day, and that you don’t just think of them as “black boxes”.

To get a little practice doing this, today you will be coding up your own linear regression model!

(If you are using this site but aren’t actually in this class, you are welcome to skip this exercise if you’d like – this is more about practicing Python in anticipation of the requirements of other courses than developing your applied data science skills.)

There are, broadly speaking, two approaches you can take to coding up your own model:

  1. you can write the model by defining a new function, or

  2. you can write the model by defining a new class with associated methods (making a model that works the way models work in scikit-learn).

Whether you do 1 or 2 is very much a matter of choice and style. Approach one, for example, is more consistent with what is called a functional style of programming, while approach two is more consistent with an object-oriented style of programming. Python can readily support both approaches, so either would work fine.

In these exercises, however, I will ask you to use approach number 2, as this tends to be the more difficult approach, and so practicing it will be extra useful in preparing you for other classes (HA! Pun…). In particular, our goal is to implement a linear regression model that has the same “initialize-fit-predict-score” API (application programming interface – a fancy name for the methods a class supports) as scikit-learn models. That means your model should be able to do all of the following (a usage sketch follows this list):

  1. Initialize a new model.

  2. Fit a linear model when given a numpy vector (y) and a numpy matrix (X) with the syntax my_model.fit(X, y).

  3. Predict values when given a new numpy matrix (X_test) with the syntax my_model.predict(X_test).

  4. Return the model coefficients through the property my_model.coefficients (not quite what is used in sklearn, but let’s use that interface).
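To make that target concrete, here is a minimal sketch of the usage we’re aiming for (MyLinearModel is the class you will define in Question 1 below; X, y, and X_test stand in for numpy arrays you have already created):

my_model = MyLinearModel()          # 1. initialize a new model
my_model.fit(X, y)                  # 2. fit on a matrix X and a vector y
y_hat = my_model.predict(X_test)    # 3. predict values for new data
print(my_model.coefficients)        # 4. inspect the fitted coefficients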

Also, bear in mind that throughout these exercises, we’ll be working in numpy instead of pandas, just as we do in scikit-learn. So assume that before using your model, your user has already converted their data from pandas into numpy arrays.

(1) Define a new class called MyLinearModel with methods for __init__, fit, and predict, and an attribute for coefficients. For now, we don’t need any initialization arguments, just an __init__ function.

As you get your code outline going, start by just having each method pass:

def my_method(self):
    pass

This will allow your methods to run without errors (they just don’t do anything). Then we can double back to each method to get them working one by one.
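For example, a minimal skeleton consistent with the API above might look like this (every method is just a placeholder at this stage):

class MyLinearModel:
    def __init__(self):
        # No initialization arguments yet -- just create an empty
        # slot for the coefficients we will compute in fit.
        self.coefficients = None

    def fit(self, X, y):
        pass

    def predict(self, X):
        pass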

(2) Now define your fit method. This is the method that should actually run your linear regression. In case you’ve forgotten your linear algebra, remember that for linear regressions, \(\beta = (X'X)^{-1}X'Y\), a fact you can see explained in detail on page four here.

Note that once you have written the code to do a linear regression, you’ll need to put your outputs (your coefficients) somewhere. I recommend making an attribute for your class where you can store your coefficients.

(As a reminder: the normal multiplication operator (*) in numpy performs element-wise multiplication, not matrix multiplication. Use @ for matrix multiplication.)

HINT: Remember that linear regressions require a vector of 1s in the X matrix. As the package writer, you get to decide whether users are expected to provide a matrix X that already has a vector of 1s, or whether you expect the user to provide a matrix X that doesn’t have a vector of 1s (in which case you will need to add a vector of 1s before you fit the model).
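If you decide to add the vector of 1s yourself, the heart of fit is just the normal equation above. A minimal sketch, assuming the user’s X arrives without a constant column:

import numpy as np

def fit(self, X, y):
    # Prepend a column of 1s so the first coefficient is the intercept.
    X = np.hstack([np.ones((X.shape[0], 1)), X])

    # Normal equation: beta = (X'X)^{-1} X'y
    self.coefficients = np.linalg.inv(X.T @ X) @ X.T @ y

(As a side note, np.linalg.solve(X.T @ X, X.T @ y) computes the same quantity without forming an explicit inverse, which is more numerically stable.)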

(3) As you write code, it is good to test your code as you work. With that in mind, let’s create some toy data. First, create a 100 x 2 matrix where each column is normally distributed. Then create a vector y that is a linear combination of those two columns plus a vector of normally distributed noise and a constant term.

In other words, we want to create data where we know exactly what coefficients we should see so when we test our regression, we know if the results are accurate!
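Here is one way to build such toy data (the intercept 0.5 and the coefficients 2 and -3 are arbitrary choices; any values would do):

import numpy as np

rng = np.random.default_rng(42)    # seeded so results are reproducible

X = rng.normal(size=(100, 2))      # 100 x 2 matrix of standard normals
noise = rng.normal(size=100)

# y = 0.5 + 2*x1 - 3*x2 + noise, so we know the "true" coefficients.
y = 0.5 + 2 * X[:, 0] - 3 * X[:, 1] + noise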

(4) Now test whether your fit method generates the correct coefficients. When you test, remember the choice you made in Question 2 about whether your package expects the user’s X matrix to include a vector of 1s!
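With the toy data above, the check might look like this (assuming your fit adds the 1s column itself, as in the sketch from Question 2):

my_model = MyLinearModel()
my_model.fit(X, y)

# Should be close to [0.5, 2, -3] (intercept first), give or take noise.
print(my_model.coefficients)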

(5) Now let’s make the statisticians proud, and in addition to storing the coefficients, let’s store the standard errors for our estimated coefficients as another attribute. Recall that the simplest method of calculating the variance-covariance matrix for \(\beta\) is to use the formula \(\sigma^2 (X'X)^{-1}\), where \(\sigma^2\) is the variance of the error terms of your regression. The standard errors for your coefficient estimates are the square roots of the diagonal values of that matrix. See page six here for a full derivation.
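Continuing the fit sketch from Question 2, where X already includes the column of 1s, one way to compute the standard errors is as follows (here \(\sigma^2\) is estimated from the residuals as \(e'e / (n - k)\), with n observations and k columns in X):

# Inside fit, after computing self.coefficients:
residuals = y - X @ self.coefficients
n, k = X.shape
sigma_squared = residuals @ residuals / (n - k)

# Variance-covariance matrix of beta; standard errors are the
# square roots of its diagonal.
vcov = sigma_squared * np.linalg.inv(X.T @ X)
self.standard_errors = np.sqrt(np.diag(vcov))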

(6) Now let’s also add an R-squared attribute to the model.
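Recall that \(R^2 = 1 - SS_{res} / SS_{tot}\), where \(SS_{res}\) is the sum of squared residuals and \(SS_{tot}\) is the sum of squared deviations of y from its mean. Continuing inside fit:

# Still inside fit, reusing the residuals computed above.
ss_res = residuals @ residuals
ss_tot = (y - y.mean()) @ (y - y.mean())
self.r_squared = 1 - ss_res / ss_tot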

(7) Now we’ll go ahead and cheat a little. Use statsmodels to fit the same regression on your toy data to ensure your standard errors and R-squared are correct!
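For reference, the statsmodels version of this check might look like the following (note that statsmodels, like our model, wants a column of 1s, which sm.add_constant prepends for you):

import statsmodels.api as sm

results = sm.OLS(y, sm.add_constant(X)).fit()

print(results.params)      # compare to my_model.coefficients
print(results.bse)         # compare to my_model.standard_errors
print(results.rsquared)    # compare to my_model.r_squared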

(8) Now implement predict! Then test it against your original X data – do you get back something very close to your true y?
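A minimal predict, consistent with a fit that adds the 1s column itself:

def predict(self, X):
    # Add the same column of 1s we added in fit, then apply the
    # fitted coefficients.
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    return X @ self.coefficients

When you run my_model.predict(X) on the original X, the predictions should track y closely but not perfectly, since the noise we added is, by construction, unpredictable.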

(9) Finally, create the option of fitting the model with or without a constant term. In other words, create an option so that, if the user passes a numpy array without a constant term, your code will add a vector of 1s before fitting the model. As in scikit-learn, make this an option you set during initialization.
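One way to wire this up (the argument name fit_intercept mirrors scikit-learn’s LinearRegression, but any name would work):

class MyLinearModel:
    def __init__(self, fit_intercept=True):
        # If True, fit adds a column of 1s to X; if False, the user
        # is expected to supply X with a constant column already.
        self.fit_intercept = fit_intercept
        self.coefficients = None

    def fit(self, X, y):
        if self.fit_intercept:
            X = np.hstack([np.ones((X.shape[0], 1)), X])
        self.coefficients = np.linalg.inv(X.T @ X) @ X.T @ y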