Class hierarchy: RidgeLogisticRegression inherits from properties.CompositeFunction (via properties.Function), properties.Gradient, properties.LipschitzContinuousGradient and properties.StepSize, each ultimately derived from object.
The Logistic Regression loss function with a squared L2 penalty.

Ridge (re-weighted) log-likelihood (cross-entropy):

    f(beta) = -loglik + (k / 2) * ||beta||^2_2
            = -Sum wi (yi log(pi) + (1 - yi) log(1 - pi)) + (k / 2) * ||beta||^2_2
            = -Sum wi (yi xi' beta - log(1 + exp(xi' beta))) + (k / 2) * ||beta||^2_2

    grad f(beta) = -Sum wi [xi (yi - pi)] + k * beta

where

    pi = p(yi = 1 | xi, beta) = 1 / (1 + exp(-xi' beta)),
    wi = weight of sample i.

[Hastie 2009, pp. 102, 119 and 161; Bishop 2006, p. 206]
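For intuition, the loss and gradient above can be transcribed directly into NumPy. The sketch below only illustrates the formulas as written (it ignores the penalty_start and mean options documented below) and is not the library's implementation:

    import numpy as np

    def ridge_logistic_loss_and_grad(beta, X, y, k, weights):
        """Direct transcription of f(beta) and grad f(beta) above (illustration only)."""
        Xbeta = X.dot(beta)                        # xi' beta for every sample, shape (n, 1)
        p = 1.0 / (1.0 + np.exp(-Xbeta))           # pi = p(yi = 1 | xi, beta)
        # f(beta) = -Sum wi (yi xi' beta - log(1 + exp(xi' beta))) + (k / 2) * ||beta||^2_2
        loss = -np.sum(weights * (y * Xbeta - np.log1p(np.exp(Xbeta)))) \
               + 0.5 * k * np.sum(beta ** 2)
        # grad f(beta) = -Sum wi [xi (yi - pi)] + k * beta
        grad = -X.T.dot(weights * (y - p)) + k * beta
        return loss, grad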
Parameters
----------
X : Numpy array (n-by-p). The regressor matrix. Training vectors, where n is the number of samples and p is the number of features.

y : Numpy array (n-by-1). The regressand vector. Target values (class labels in classification).

k : Non-negative float. The ridge parameter.

weights : Numpy array (n-by-1). The sample weights.

penalty_start : Non-negative integer. The number of columns, variables etc., to exempt from penalisation. Equivalently, the first index to be penalised. Default is 0, all columns are included.

mean : Boolean. Whether to compute the mean loss or not. Default is True, the mean loss is computed.
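A minimal construction sketch under the parameters above; the shapes and values are illustrative, and it assumes the keyword names match the documented parameters (it mirrors the doctest shown for the gradient further down):

>>> import numpy as np
>>> from parsimony.functions.losses import RidgeLogisticRegression
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)                 # n = 100 samples, p = 150 features
>>> y = (np.random.rand(100, 1) >= 0.5) * 1.0    # binary class labels in {0, 1}
>>> weights = np.ones((100, 1))                  # equal sample weights
>>> rr = RidgeLogisticRegression(X=X, y=y, k=0.5, weights=weights,
...                              penalty_start=0, mean=True)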
Free any cached computations from previous use of this Function. From the interface "Function".
Function value of the logistic regression at beta.

Parameters
----------
beta : Numpy array. Regression coefficient vector. The point at which to evaluate the function.
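A short usage sketch, assuming the function value is exposed as f(beta), the name used in the loss formula above; continuing from the construction example:

>>> beta = np.random.rand(150, 1)
>>> value = rr.f(beta)    # scalar loss at beta; the method name f is assumed here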
Gradient of the function at beta. From the interface "Gradient".

Parameters
----------
beta : Numpy array. The point at which to evaluate the gradient.

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.losses import RidgeLogisticRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.rand(100, 1)
>>> y[y < 0.5] = 0.0
>>> y[y >= 0.5] = 1.0
>>> rr = RidgeLogisticRegression(X=X, y=y, k=2.71828182, mean=True)
>>> beta = np.random.rand(150, 1)
>>> round(np.linalg.norm(rr.grad(beta)
...       - rr.approx_grad(beta, eps=1e-4)), 11) < 1e-9
True
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.rand(100, 1)
>>> y[y < 0.5] = 0.0
>>> y[y >= 0.5] = 1.0
>>> rr = RidgeLogisticRegression(X=X, y=y, k=2.71828182, mean=False)
>>> beta = np.random.rand(150, 1)
>>> np.linalg.norm(rr.grad(beta)
...                - rr.approx_grad(beta, eps=1e-4)) < 5e-8
True
Lipschitz constant of the gradient. Returns the maximum eigenvalue of (1 / 4) * X'WX. From the interface "LipschitzContinuousGradient".
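For illustration, the quantity described here can be computed by hand with NumPy. The sketch below only reproduces the stated bound, the largest eigenvalue of (1 / 4) * X'WX with W = diag(weights); the division by n for the mean loss is an assumption, and this is not the library's code:

    import numpy as np

    def lipschitz_bound(X, weights, mean=True):
        """Largest eigenvalue of (1 / 4) * X' W X, with W = diag(weights)."""
        W = np.diag(weights.ravel())
        A = 0.25 * X.T.dot(W).dot(X)
        if mean:                          # assumed: the mean loss scales the bound by 1 / n
            A = A / float(X.shape[0])
        return np.linalg.eigvalsh(A)[-1]  # eigvalsh returns eigenvalues in ascending order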
The step size to use in descent methods.

Parameters
----------
beta : Numpy array. The point at which to determine the step size.
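An illustrative descent loop built from this page's pieces; the method name step(beta) is assumed from this docstring, while grad(beta) is confirmed by the example above:

>>> beta = np.random.rand(150, 1)
>>> for _ in range(100):                     # plain gradient descent, sketch only
...     t = rr.step(beta)                    # step size at the current point (assumed name)
...     beta = beta - t * rr.grad(beta)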