Class LatentVariableCovariance

LatentVariableCovariance inherits from properties.MultiblockFunction (via
object --> properties.Function --> properties.CompositeFunction -->
properties.MultiblockFunction), from properties.MultiblockGradient and from
properties.MultiblockLipschitzContinuousGradient.
Represents Cov(X.w, Y.c) = (1 / (n - 1)) * w'.X'.Y.c, where X.w and Y.c are
latent variables.

Parameters
----------
X : List with two numpy arrays. The two blocks.

unbiased : Boolean. Whether to use the unbiased (True) or the biased (False)
        sample covariance. Default is True, i.e. the unbiased sample
        covariance is used.
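The quantity can be checked directly with plain NumPy. The sketch below does
not use the class API; the array names and shapes simply mirror the gradient
example further down:

>>> import numpy as np
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> Y = np.random.rand(100, 50)
>>> w = np.random.rand(150, 1)
>>> c = np.random.rand(50, 1)
>>> n = X.shape[0]
>>> t = X.dot(w)  # latent variable (score) of the first block, X.w
>>> u = Y.dot(c)  # latent variable (score) of the second block, Y.c
>>> cov_scores = t.T.dot(u).item() / (n - 1)  # Cov(X.w, Y.c)
>>> cov_formula = w.T.dot(X.T).dot(Y).dot(c).item() / (n - 1)  # (1/(n-1)) * w'.X'.Y.c
>>> np.allclose(cov_scores, cov_formula)
True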
x.__init__(...) initializes x; see help(type(x)) for signature

Free any cached computations from previous use of this Function.

Function value. From the interface "Function".
Gradient of the function. From the interface "MultiblockGradient".

Parameters
----------
w : List of numpy arrays. The weight vectors; w[index] is the point at which
        to evaluate the gradient.

index : Non-negative integer. Which variable the gradient is for.

Examples
--------
>>> import numpy as np
>>> from parsimony.functions.multiblock.losses import LatentVariableCovariance
>>>
>>> np.random.seed(42)
>>> X = np.random.rand(100, 150)
>>> Y = np.random.rand(100, 50)
>>> w = np.random.rand(150, 1)
>>> c = np.random.rand(50, 1)
>>> cov = LatentVariableCovariance([X, Y])
>>> grad = cov.grad([w, c], 0)
>>> approx_grad = cov.approx_grad([w, c], 0)
>>> np.allclose(grad, approx_grad)
True
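The closed form of this gradient follows from the covariance formula above:
with the unbiased default, the gradient with respect to the first block is
X'.Y.c / (n - 1), possibly negated depending on the implementation's sign
convention. A quick magnitude check (not part of the original example),
continuing the variables above; manual_grad is introduced here only for
illustration:

>>> manual_grad = X.T.dot(Y).dot(c) / (X.shape[0] - 1)  # d/dw of (1/(n-1)) * w'.X'.Y.c
>>> np.allclose(np.abs(grad), np.abs(manual_grad))  # compare magnitudes to ignore sign
True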
Lipschitz constant of the gradient with given index. From the interface
"MultiblockLipschitzContinuousGradient".