Class hierarchy (RidgeSquaredError inherits from):

    object --> properties.Function --> properties.CompositeFunction
    object --> properties.Gradient
    object --> properties.StronglyConvex
    object --> properties.Penalty
    object --> properties.ProximalOperator
Represents a ridge squared error penalty, i.e. a representation of

    f(b) = l * ((1 / (2 * n)) * ||Xb - y||²_2 + (k / 2) * ||b||²_2),

where ||.||_2 is the L2 norm.

Parameters
----------
l : Non-negative float. The Lagrange multiplier, or regularisation constant, of the function.

X : Numpy array (n-by-p). The regressor matrix.

y : Numpy array (n-by-1). The regressand vector.

k : Non-negative float. The ridge parameter.

penalty_start : Non-negative integer. The number of columns, variables etc., to exempt from penalisation. Equivalently, the first index to be penalised. Default is 0, i.e. all columns are included.

mean : Boolean. Whether to compute the squared loss or the mean squared loss. Default is True, the mean squared loss.
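The loss above can be sketched directly in NumPy. This is a hypothetical standalone helper, not the library's implementation; it assumes mean=True (the 1/(2n) factor) and penalty_start=0 (all columns penalised):

```python
import numpy as np

def ridge_squared_error(beta, X, y, l=1.0, k=0.05):
    """Sketch of f(b) = l * ((1/(2n)) * ||Xb - y||^2_2 + (k/2) * ||b||^2_2).

    Hypothetical helper, not parsimony's code; assumes mean=True and
    penalty_start=0 (all coefficients are penalised).
    """
    n = X.shape[0]
    residual = X @ beta - y
    data_term = (1.0 / (2.0 * n)) * float(residual.T @ residual)
    ridge_term = (k / 2.0) * float(beta.T @ beta)
    return l * (data_term + ridge_term)

np.random.seed(42)
X = np.random.rand(100, 150)
y = np.random.rand(100, 1)
beta = np.random.rand(150, 1)
value = ridge_squared_error(beta, X, y, l=1.0, k=3.14159265)
```

With mean=False the 1/(2n) factor would become 1/2, and with penalty_start > 0 the ridge term would skip the first penalty_start coefficients.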
x.__init__(...) initializes x; see help(type(x)) for signature
Free any cached computations from previous use of this Function. From the interface "Function".
Function value. From the interface "Function".

Parameters
----------
x : Numpy array. The regression coefficient vector; the point at which to evaluate the function.
Gradient of the function at beta. From the interface "Gradient".

Parameters
----------
x : Numpy array. The point at which to evaluate the gradient.

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.losses import RidgeRegression
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> y = np.random.rand(100, 1)
>>> rr = RidgeRegression(X=X, y=y, k=3.14159265)
>>> beta = np.random.rand(150, 1)
>>> round(np.linalg.norm(rr.grad(beta)
...                      - rr.approx_grad(beta, eps=1e-4)), 9)
1.3e-08
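The analytic gradient can be cross-checked against central finite differences, in the spirit of the grad-vs-approx_grad comparison in the doctest above. A minimal sketch assuming mean=True and penalty_start=0; ridge_f and ridge_grad are hypothetical helpers, not parsimony's API:

```python
import numpy as np

def ridge_f(beta, X, y, l=1.0, k=0.05):
    # f(b) = l * ((1/(2n)) ||Xb - y||^2 + (k/2) ||b||^2); mean=True, penalty_start=0.
    n = X.shape[0]
    r = X @ beta - y
    return l * (float(r.T @ r) / (2.0 * n) + (k / 2.0) * float(beta.T @ beta))

def ridge_grad(beta, X, y, l=1.0, k=0.05):
    # Analytic gradient: grad f(b) = l * ((1/n) X^T (Xb - y) + k b).
    n = X.shape[0]
    return l * ((X.T @ (X @ beta - y)) / n + k * beta)

# Central finite-difference approximation, one coordinate at a time.
np.random.seed(42)
X = np.random.rand(20, 30)
y = np.random.rand(20, 1)
beta = np.random.rand(30, 1)
eps = 1e-4
num = np.zeros_like(beta)
for i in range(beta.shape[0]):
    e = np.zeros_like(beta)
    e[i, 0] = eps
    num[i, 0] = (ridge_f(beta + e, X, y, k=3.14)
                 - ridge_f(beta - e, X, y, k=3.14)) / (2.0 * eps)
err = np.linalg.norm(ridge_grad(beta, X, y, k=3.14) - num)
```

Because f is quadratic, the central difference is exact up to floating-point rounding, so err should be tiny.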
Lipschitz constant of the gradient. From the interface "LipschitzContinuousGradient".
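Since the gradient is affine in b, its Lipschitz constant is the largest eigenvalue of the (constant) Hessian l * ((1/n) X^T X + k I). A sketch under the same assumptions as above (mean=True, penalty_start=0):

```python
import numpy as np

np.random.seed(0)
X = np.random.rand(50, 40)
n = X.shape[0]
l, k = 1.0, 0.5

# grad f(b) = l * ((1/n) X^T X b - (1/n) X^T y + k b), so the Lipschitz
# constant of the gradient is the largest eigenvalue of l * ((1/n) X^T X + k I),
# i.e. l * (sigma_max(X)^2 / n + k).
sigma_max = np.linalg.norm(X, 2)  # largest singular value of X
L = l * (sigma_max ** 2 / n + k)
```

The spectral norm route avoids forming X^T X explicitly, which matters when p is large.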
Returns the strongly convex parameter for the function. From the interface "StronglyConvex".
The proximal operator associated to this function. From the interface "ProximalOperator".

Parameters
----------
x : Numpy array (p-by-1). The point at which to apply the proximal operator.

factor : Positive float. A factor by which the Lagrange multiplier is scaled. This is usually the step size.

eps : Positive float. The stopping criterion for inexact proximal methods, where the proximal operator is approximated numerically.

max_iter : Positive integer. The maximum number of iterations for inexact proximal methods, where the proximal operator is approximated numerically.

index : Non-negative integer. For multivariate functions, this identifies the variable with which the proximal operator is associated.
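For this quadratic function the proximal operator has a closed form: setting the gradient of f(u) + (1/(2t)) ||u - v||²_2 to zero yields a linear system. A sketch assuming mean=True and penalty_start=0; ridge_prox is a hypothetical helper, and the library may instead approximate the operator numerically, as the eps/max_iter parameters suggest:

```python
import numpy as np

def ridge_prox(v, X, y, l=1.0, k=0.05, factor=1.0):
    """Closed-form prox sketch for
    f(b) = l * ((1/(2n)) ||Xb - y||^2 + (k/2) ||b||^2)  (mean=True, penalty_start=0).

    prox_{t f}(v) minimises f(u) + (1/(2t)) ||u - v||^2 with t = factor.
    Setting the gradient to zero gives
        (I + t*l*(X^T X / n + k I)) u = v + (t*l/n) X^T y.
    Hypothetical helper, not parsimony's implementation.
    """
    n, p = X.shape
    t = factor
    A = np.eye(p) + t * l * (X.T @ X / n + k * np.eye(p))
    b = v + (t * l / n) * (X.T @ y)
    return np.linalg.solve(A, b)

np.random.seed(1)
X = np.random.rand(15, 10)
y = np.random.rand(15, 1)
v = np.random.rand(10, 1)
u = ridge_prox(v, X, y, l=2.0, k=0.3, factor=0.7)
```

The result can be verified against the prox optimality condition: grad f(u) + (u - v)/t should vanish at u.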
Generated by Epydoc 3.0.1 on Mon Apr 6 23:52:11 2015 (http://epydoc.sourceforge.net)