Class LogisticRegression

Base classes: properties.AtomicFunction (object --> properties.Function --> properties.AtomicFunction), properties.Gradient, properties.LipschitzContinuousGradient and properties.StepSize.
The Logistic Regression loss function.

(Re-weighted) log-likelihood (cross-entropy):

    f(beta) = -Sum_i w_i (y_i log(p_i) + (1 - y_i) log(1 - p_i))
            = -Sum_i w_i (y_i x_i' beta - log(1 + exp(x_i' beta))),

    grad f(beta) = -Sum_i w_i x_i (y_i - p_i),

where p_i = p(y = 1 | x_i, beta) = 1 / (1 + exp(-x_i' beta)) and w_i is the
weight for sample i.

See [Hastie 2009, pp. 102, 119 and 161; Bishop 2006, p. 206] for details.

Parameters
----------
X : Numpy array (n-by-p). The regressor matrix.

y : Numpy array (n-by-1). The regressand vector.

weights : Numpy array (n-by-1). The sample weights.

mean : Boolean. Whether to compute the loss or the mean (averaged) loss.
    Default is True, the mean loss.
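A minimal construction sketch (the data, shapes and the uniform weight vector are illustrative assumptions, not from the original page):

>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(10, 3)             # 10 samples, 3 regressors
>>> y = np.random.randint(0, 2, (10, 1))  # binary regressand
>>> w = np.ones((10, 1))                  # uniform sample weights (illustrative)
>>> lr = LogisticRegression(X=X, y=y, weights=w, mean=True)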
__init__(...)

x.__init__(...) initializes x; see help(type(x)) for signature.
reset()

Free any cached computations from previous use of this Function. From the
interface "Function".
f(beta)

Function value at the point beta. From the interface "Function".

Parameters
----------
beta : Numpy array. Regression coefficient vector. The point at which to
    evaluate the function.
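A usage sketch for f (the data and shapes are illustrative assumptions, not from the original examples):

>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(20, 5)
>>> y = np.random.randint(0, 2, (20, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.random.rand(5, 1)
>>> value = lr.f(beta)  # mean cross-entropy loss at beta (a scalar)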
grad(beta)

Gradient of the function at beta. From the interface "Gradient".

Parameters
----------
beta : Numpy array. The point at which to evaluate the gradient.

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.randint(0, 2, (100, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.random.rand(150, 1)
>>> round(np.linalg.norm(lr.grad(beta)
...                      - lr.approx_grad(beta, eps=1e-4)), 10)
4e-10
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.randint(0, 2, (100, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=False)
>>> beta = np.random.rand(150, 1)
>>> round(np.linalg.norm(lr.grad(beta)
...                      - lr.approx_grad(beta, eps=1e-4)), 9)
3.9e-08
L()

Lipschitz constant of the gradient. Returns the maximum eigenvalue of
(1 / 4) * X'WX. From the interface "LipschitzContinuousGradient".

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(10, 15)
>>> y = np.random.randint(0, 2, (10, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> L = lr.L()
>>> L_ = lr.approx_L((15, 1), 10000)
>>> L >= L_
True
>>> round((L - L_) / L, 15)
0.45110910457988
>>> lr = LogisticRegression(X=X, y=y, mean=False)
>>> L = lr.L()
>>> L_ = lr.approx_L((15, 1), 10000)
>>> L >= L_
True
>>> round((L - L_) / L, 13)
0.430306683612
step(beta)

The step size to use in descent methods.

Parameters
----------
beta : Numpy array. The point at which to determine the step size.
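A sketch of how the step size might be used in a plain gradient-descent update (the loop and data are illustrative assumptions, not part of the documented API):

>>> import numpy as np
>>> from parsimony.functions.losses import LogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(20, 5)
>>> y = np.random.randint(0, 2, (20, 1))
>>> lr = LogisticRegression(X=X, y=y, mean=True)
>>> beta = np.zeros((5, 1))
>>> for _ in range(10):
...     t = lr.step(beta)                # step size at the current point
...     beta = beta - t * lr.grad(beta)  # one gradient-descent update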