
Automatically Learning From Data – Logistic Regression With L2 Regularization in Python


Logistic Regression

Logistic regression is used for binary classification problems: you have some examples that are "on" and other examples that are "off." You get as input a training set, which has some examples of each class along with a label saying whether each example is "on" or "off." The goal is to learn a model from the training data so that you can predict the label of new examples that you haven't seen before and don't know the label of.

For a concrete example, suppose you have data describing a bunch of buildings and earthquakes (e.g., the year the building was constructed, the type of material used, the strength of the earthquake, etc.), and you know whether each building collapsed ("on") or not ("off") in each past earthquake. Using this data, you'd like to make predictions about whether a given building is going to collapse in a hypothetical future earthquake.

One of the first models that would be worth trying is logistic regression.

Coding it up

I wasn't working on this exact problem, but I was working on something close. Being one to practice what I preach, I started looking for a dead simple Python logistic regression class. The only requirement was that I wanted it to support L2 regularization (more on this later). I'm also sharing this code with a bunch of other people on many platforms, so I wanted as few dependencies on external libraries as possible.

I could not discover precisely what I wished, so I made a decision to take a stroll down reminiscence lane and implement it myself.  I’ve written it in C++ and Matlab earlier than however by no means in Python.

I won't do the derivation, but there are plenty of good explanations out there to follow if you're not afraid of a little calculus. Just do some Googling for "logistic regression derivation." The big idea is to write down the probability of the data given some setting of internal parameters, then take the derivative, which tells you how to change the internal parameters to make the data more likely. Got it? Good.
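For concreteness, here is a sketch of the objective in my own notation (not quoted from any particular derivation, but it matches what the lik() and train() methods below compute, using labels y_i in {-1, +1} and regularization strength alpha):

\[
L(\beta) = \sum_{i=1}^{N} \log \sigma\left(y_i \, \beta^\top x_i\right) - \frac{\alpha}{2} \sum_{k} \beta_k^2,
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}
\]

\[
\frac{\partial L}{\partial \beta_k} = \sum_{i=1}^{N} y_i \, x_{ik} \, \sigma\left(-y_i \, \beta^\top x_i\right) - \alpha \beta_k
\]

The optimizer then minimizes -L(beta). (One small detail: the code below leaves beta_0 out of the penalty sum.)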

For those of you out there who know logistic regression inside and out, take a look at how short the train() method is. I really like how easy it is to do in Python.

Regularization

I caught a little indirect flak during March madness season for talking about how I regularized the latent vectors in my matrix-factorization model of teams' offensive and defensive strengths when predicting outcomes in NCAA basketball. Apparently people thought I was talking nonsense. Crazy, right?

But seriously, guys: regularization is a good idea.

Let me drive the point home. Take a look at the results of running the code (linked at the bottom).

Take a look at the top row.

On the left side, you have the training set. There are 25 examples laid out along the x axis, and the y axis tells you whether each example is "on" (1) or "off" (0). For each of these examples, there's a vector describing its attributes that I'm not showing. After training the model, I ask it to ignore the known training set labels and to estimate the probability that each label is "on" based only on the examples' description vectors and what the model has learned (hopefully things like stronger earthquakes and older buildings increase the likelihood of collapse). The probabilities are shown by the red X's. In the top left, the red X's sit right on top of the blue dots, so the model is very sure about the labels of the examples, and it's always correct.

Now on the right side, we have some new examples that the model hasn't seen before. This is called the test set. It's essentially the same as the left side, but the model knows nothing about the test set class labels (yellow dots). What you see is that it still does a decent job of predicting the labels, but there are some troubling cases where it's very confident and very wrong. This is called overfitting.

This is where regularization comes in. As you go down the rows, there is stronger L2 regularization, or equivalently, more pressure on the internal parameters to be zero. This has the effect of reducing the model's certainty. Just because it can perfectly reconstruct the training set doesn't mean it has everything figured out. You can imagine that if you were relying on this model to make important decisions, it would be desirable to have at least a bit of regularization in there.
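To see the shrinkage effect numerically, here is a minimal sketch (not part of the original post) that assumes the SyntheticClassifierData and LogisticRegression classes defined in the code below. It just prints the norm of the learned weight vector for each regularization strength, which should get smaller as alpha grows:

import numpy as np

# Assumes SyntheticClassifierData and LogisticRegression from the code below.
data = SyntheticClassifierData(25, 20)
for a in [0, .001, .01, .1]:
    # Same data for every run; only the L2 strength changes.
    lr = LogisticRegression(x_train=data.X_train, y_train=data.Y_train,
                            x_test=data.X_test, y_test=data.Y_test, alpha=a)
    lr.train()
    # Larger alpha should give a smaller weight norm (less extreme probabilities).
    print("alpha=%s  ||betas|| = %.2f" % (a, np.linalg.norm(lr.betas)))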

And here's the code. It looks long, but most of it is there to generate the data and plot the results. The bulk of the work is done in the train() method, which is just three (dense) lines. It requires numpy, scipy, and pylab.

* For full disclosure, I should admit that I generated my random data in a way that makes it prone to overfitting, possibly making logistic-regression-without-regularization look worse than it is.

The Python Code

from scipy.optimize import fmin_bfgs
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class SyntheticClassifierData():

    def __init__(self, N, d):
        """ Create N instances of d-dimensional input vectors and a 1D
        class label (-1 or 1). """

        means = .05 * np.random.randn(2, d)

        self.X_train = np.zeros((N, d))
        self.Y_train = np.zeros(N)
        for i in range(N):
            if np.random.random() > .5:
                y = 1
            else:
                y = 0
            self.X_train[i, :] = np.random.random(d) + means[y, :]
            self.Y_train[i] = 2.0 * y - 1

        self.X_test = np.zeros((N, d))
        self.Y_test = np.zeros(N)
        for i in range(N):
            if np.random.random() > .5:
                y = 1
            else:
                y = 0
            self.X_test[i, :] = np.random.random(d) + means[y, :]
            self.Y_test[i] = 2.0 * y - 1


class LogisticRegression():
    """ A simple logistic regression model with L2 regularization (zero-mean
    Gaussian priors on parameters). """

    def __init__(self, x_train=None, y_train=None, x_test=None, y_test=None,
                 alpha=.1, synthetic=False):

        # Set the L2 regularization strength
        self.alpha = alpha

        # Set the data.
        self.set_data(x_train, y_train, x_test, y_test)

        # Initialize parameters to zero, for lack of a better choice.
        self.betas = np.zeros(self.x_train.shape[1])

    def negative_lik(self, betas):
        return -1 * self.lik(betas)

    def lik(self, betas):
        """ Likelihood of the data under the current settings of parameters. """

        # Data likelihood
        l = 0
        for i in range(self.n):
            l += np.log(sigmoid(self.y_train[i] *
                                np.dot(betas, self.x_train[i, :])))

        # Prior likelihood (note: beta_0 is not regularized)
        for k in range(1, self.x_train.shape[1]):
            l -= (self.alpha / 2.0) * betas[k]**2

        return l

    def train(self):
        """ Define the gradient and hand it off to a scipy gradient-based
        optimizer. """

        # Define the derivative of the likelihood with respect to beta_k.
        # Need to multiply by -1 because we will be minimizing.
        dB_k = lambda B, k: np.sum([-self.alpha * B[k] +
                                    self.y_train[i] * self.x_train[i, k] *
                                    sigmoid(-self.y_train[i] *
                                            np.dot(B, self.x_train[i, :]))
                                    for i in range(self.n)]) * -1

        # The full gradient is just an array of componentwise derivatives
        dB = lambda B: np.array([dB_k(B, k)
                                 for k in range(self.x_train.shape[1])])

        # Optimize
        self.betas = fmin_bfgs(self.negative_lik, self.betas, fprime=dB)

    def set_data(self, x_train, y_train, x_test, y_test):
        """ Take data that's already been generated. """

        self.x_train = x_train
        self.y_train = y_train
        self.x_test = x_test
        self.y_test = y_test
        self.n = y_train.shape[0]

    def training_reconstruction(self):
        p_y1 = np.zeros(self.n)
        for i in range(self.n):
            p_y1[i] = sigmoid(np.dot(self.betas, self.x_train[i, :]))

        return p_y1

    def test_predictions(self):
        p_y1 = np.zeros(self.n)
        for i in range(self.n):
            p_y1[i] = sigmoid(np.dot(self.betas, self.x_test[i, :]))

        return p_y1

    def plot_training_reconstruction(self):
        plot(np.arange(self.n), .5 + .5 * self.y_train, 'bo')
        plot(np.arange(self.n), self.training_reconstruction(), 'rx')
        ylim([-.1, 1.1])

    def plot_test_predictions(self):
        plot(np.arange(self.n), .5 + .5 * self.y_test, 'yo')
        plot(np.arange(self.n), self.test_predictions(), 'rx')
        ylim([-.1, 1.1])


if __name__ == "__main__":
    from pylab import *

    # Create a 20-dimensional data set with 25 points -- this will be
    # prone to overfitting.
    data = SyntheticClassifierData(25, 20)

    # Run for a variety of regularization strengths
    alphas = [0, .001, .01, .1]
    for j, a in enumerate(alphas):

        # Create a new learner, but use the same data for each run
        lr = LogisticRegression(x_train=data.X_train, y_train=data.Y_train,
                                x_test=data.X_test, y_test=data.Y_test,
                                alpha=a)

        print("Initial likelihood:")
        print(lr.lik(lr.betas))

        # Train the model
        lr.train()

        # Display execution info
        print("Final betas:")
        print(lr.betas)
        print("Final lik:")
        print(lr.lik(lr.betas))

        # Plot the results
        subplot(len(alphas), 2, 2*j + 1)
        lr.plot_training_reconstruction()
        ylabel("Alpha=%s" % a)
        if j == 0:
            title("Training set reconstructions")

        subplot(len(alphas), 2, 2*j + 2)
        lr.plot_test_predictions()
        if j == 0:
            title("Test set predictions")

    show()



Source: Daniel Tarlow
