
Class LogisticRegression

                        object --+        
                                 |        
               properties.Function --+    
                                     |    
             properties.AtomicFunction --+
                                         |
                            object --+   |
                                     |   |
                   properties.Gradient --+
                                         |
                            object --+   |
                                     |   |
properties.LipschitzContinuousGradient --+
                                         |
                            object --+   |
                                     |   |
                   properties.StepSize --+
                                         |
                                        LogisticRegression

The Logistic Regression loss function.

(Re-weighted) Log-likelihood (cross-entropy):
  * f(beta) = -Sum wi (yi log(pi) + (1 - yi) log(1 - pi))
            = -Sum wi (yi xi' beta - log(1 + exp(xi' beta))),

  * grad f(beta) = -Sum wi [xi (yi - pi)],

where pi = p(y=1 | xi, beta) = 1 / (1 + exp(-xi' beta)) and wi is the
weight for sample i.

See [Hastie 2009, pp. 102, 119 and 161; Bishop 2006, p. 206] for details.

Parameters
----------
X : Numpy array (n-by-p). The regressor matrix.

y : Numpy array (n-by-1). The regressand vector.

weights : Numpy array (n-by-1). The sample weights.

mean : Boolean. Whether to compute the loss or the mean (averaged) loss.
        Default is True, the mean loss.
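
A minimal usage sketch (an illustration, not one of the module's shipped
doctests): with unit sample weights and the mean loss, the formula above
gives f(0) = log(2), since every term reduces to log(1 + exp(0)).

>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(20, 5)
>>> y = np.random.randint(0, 2, (20, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.zeros((5, 1))
>>> bool(np.allclose(lr.f(beta), np.log(2.0)))
True
>>> lr.grad(beta).shape
(5, 1)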

Instance Methods
 
__init__(self, X, y, weights=None, mean=True)
x.__init__(...) initializes x; see help(type(x)) for signature

reset(self)
Free any cached computations from previous use of this Function.

f(self, beta)
Function value at the point beta.

grad(self, beta)
Gradient of the function at beta.

L(self)
Lipschitz constant of the gradient.

step(self, beta, index=0)
The step size to use in descent methods.

Inherited from properties.Function: get_params, set_params

Inherited from properties.Gradient: approx_grad

Inherited from properties.LipschitzContinuousGradient: approx_L

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Class Variables
  __abstractmethods__ = frozenset([])

Inherited from properties.AtomicFunction: __metaclass__

Properties

Inherited from object: __class__

Method Details

__init__(self, X, y, weights=None, mean=True)
(Constructor)

x.__init__(...) initializes x; see help(type(x)) for signature

Overrides: object.__init__
(inherited documentation)

reset(self)

Free any cached computations from previous use of this Function.

From the interface "Function".

Overrides: properties.Function.reset

f(self, beta)

Function value at the point beta.

From the interface "Function".

Parameters
----------
beta : Numpy array. Regression coefficient vector. The point at which
        to evaluate the function.
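
A hedged sanity check (not one of the shipped doctests): the returned
value can be compared against the cross-entropy form given in the class
description, assuming unit sample weights and that mean=True averages
over the n samples.

>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(10, 4)
>>> y = np.random.randint(0, 2, (10, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.random.rand(4, 1)
>>> p = 1.0 / (1.0 + np.exp(-np.dot(X, beta)))
>>> cross_entropy = -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
>>> bool(np.allclose(lr.f(beta), cross_entropy))
True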

Overrides: properties.Function.f

grad(self, beta)

Gradient of the function at beta.

From the interface "Gradient".

Parameters
----------
beta : Numpy array. The point at which to evaluate the gradient.

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.randint(0, 2, (100, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.random.rand(150, 1)
>>> round(np.linalg.norm(lr.grad(beta)
...       - lr.approx_grad(beta, eps=1e-4)), 10)
4e-10
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.randint(0, 2, (100, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=False)
>>> beta = np.random.rand(150, 1)
>>> round(np.linalg.norm(lr.grad(beta)
...       - lr.approx_grad(beta, eps=1e-4)), 9)
3.9e-08

Overrides: properties.Gradient.grad

L(self)

Lipschitz constant of the gradient.

Returns the maximum eigenvalue of (1 / 4) * X'WX.

From the interface "LipschitzContinuousGradient".

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(10, 15)
>>> y = np.random.randint(0, 2, (10, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> L = lr.L()
>>> L_ = lr.approx_L((15, 1), 10000)
>>> L >= L_
True
>>> round((L - L_) / L, 15)
0.45110910457988
>>> lr = LogisticRegression(X=X, y=y, mean=False)
>>> L = lr.L()
>>> L_ = lr.approx_L((15, 1), 10000)
>>> L >= L_
True
>>> round((L - L_) / L, 13)
0.430306683612
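
Continuing the session above, a further hedged check (not part of the
shipped doctests): with unit sample weights and mean=False, W reduces to
the identity, so the constant should equal the largest eigenvalue of
X'X / 4.

>>> np.random.seed(42)
>>> X = np.random.rand(10, 15)
>>> y = np.random.randint(0, 2, (10, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=False)
>>> lam_max = np.max(np.linalg.eigvalsh(np.dot(X.T, X))) / 4.0
>>> bool(np.allclose(lr.L(), lam_max))
True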

Overrides: properties.LipschitzContinuousGradient.L

step(self, beta, index=0)

The step size to use in descent methods.

Parameters
----------
beta : Numpy array. The point at which to determine the step size.
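
A hedged usage sketch (not from the library's own doctests): plain
gradient descent with the returned step size is expected to decrease the
loss, since the step is derived from the Lipschitz constant of the
gradient.

>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(50, 10)
>>> y = np.random.randint(0, 2, (50, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.zeros((10, 1))
>>> f_start = lr.f(beta)
>>> for _ in range(100):
...     beta = beta - lr.step(beta) * lr.grad(beta)
>>> bool(lr.f(beta) < f_start)
True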

Overrides: properties.StepSize.step