run(self,
X,
Y,
start_vector=None,
eps=5e-08,
max_iter=100,
min_iter=1)
A kernel SVD implementation for the product of two matrices, X and Y.
I.e. the SVD of np.dot(X, Y), but computed without ever forming the
matrix product explicitly. This is generally faster than running
np.linalg.svd on the product when only one, or a few, singular
vectors are needed.
Parameters
----------
X : Numpy array with shape (n, p). The first matrix of the product.
Y : Numpy array with shape (p, m). The second matrix of the product.
start_vector : Numpy array. The start vector.
eps : Float. Tolerance.
max_iter : Integer. Maximum number of iterations.
min_iter : Integer. Minimum number of iterations.
Returns
-------
v : Numpy array. The right singular vector of np.dot(X, Y) that
corresponds to the largest singular value of np.dot(X, Y).
Example
-------
>>> import numpy as np
>>> from parsimony.algorithms.nipals import FastSVDProduct
>>> np.random.seed(0)
>>> X = np.random.random((15,10))
>>> Y = np.random.random((10,5))
>>> fast_svd = FastSVDProduct()
>>> fast_svd.run(X, Y)
array([[ 0.47169804],
       [ 0.38956366],
       [ 0.41397845],
       [ 0.52493576],
       [ 0.42285389]])
- Overrides:
bases.ImplicitAlgorithm.run
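The implicit-product trick can be sketched as a power iteration on
(XY)'(XY) = Y'X'XY, evaluated as a chain of matrix-vector products so
that X and Y are never multiplied together. The helper name
`fast_svd_product` below is hypothetical, and the stopping rule is a
simplified sketch of what an implementation like this might use, not
the library's exact algorithm.

```python
import numpy as np

def fast_svd_product(X, Y, start_vector=None, eps=5e-8, max_iter=100):
    """Leading right singular vector of np.dot(X, Y), computed
    without forming the product X @ Y."""
    m = Y.shape[1]
    if start_vector is None:
        start_vector = np.random.rand(m, 1)
    v = start_vector / np.linalg.norm(start_vector)
    for _ in range(max_iter):
        # (Y' (X' (X (Y v)))) costs four matrix-vector products
        # instead of one (n x m) matrix product plus one product.
        v_new = Y.T @ (X.T @ (X @ (Y @ v)))
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < eps:
            v = v_new
            break
        v = v_new
    return v
```

Checking against np.linalg.svd on the explicitly formed product: the
returned vector should match the first row of Vt up to sign.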