run(self, X, max_iter=100, eps=5e-08, start_vector=None)
A kernel SVD implementation.
Performs the SVD of a given matrix. This is generally faster than
np.linalg.svd, and in particular much faster when M << N or M >> N,
for an M-by-N matrix.
Parameters
----------
X : Numpy array. The matrix to decompose.
max_iter : Non-negative integer. Maximum allowed number of iterations.
Default is 100.
eps : Positive float. The tolerance used by the stopping criterion.
    Default is 5e-08.
start_vector : BaseStartVector. A start vector generator. Default is
to use a random start vector.
Returns
-------
v : Numpy array. The right singular vector of X that corresponds to the
    largest singular value.
Examples
--------
>>> import numpy as np
>>> from parsimony.algorithms.nipals import FastSVD
>>>
>>> np.random.seed(0)
>>> X = np.random.random((10, 10))
>>> fast_svd = FastSVD()
>>> fast_svd.run(X)
array([[-0.3522974 ],
[-0.35647707],
[-0.35190104],
[-0.34715338],
[-0.19594198],
[-0.24103104],
[-0.25578904],
[-0.29501092],
[-0.42311297],
[-0.27656382]])
>>>
>>> np.random.seed(0)
>>> X = np.random.random((100, 150))
>>> fast_svd = FastSVD()
>>> v = fast_svd.run(X)
>>> us = np.linalg.norm(np.dot(X, v))
>>> s = np.linalg.svd(X, full_matrices=False, compute_uv=False)
>>> abs(np.sum(us ** 2.0) - np.max(s) ** 2.0) < 5e-13
True
>>>
>>> np.random.seed(0)
>>> X = np.random.random((100, 50))
>>> fast_svd = FastSVD()
>>> v = fast_svd.run(X)
>>> us = np.linalg.norm(np.dot(X, v))
>>> s = np.linalg.svd(X, full_matrices=False, compute_uv=False)
>>> abs(np.sum(us ** 2.0) - np.max(s) ** 2.0)
4.5474735088646412e-13
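The speed-up for tall or wide matrices comes from running a power iteration on the smaller of the two Gram matrices rather than decomposing X itself. A minimal sketch of that idea follows; the function name, random start vector, and seed handling are illustrative, not the parsimony implementation:

```python
import numpy as np

def dominant_right_singular_vector(X, max_iter=100, eps=5e-8):
    """Power iteration for the top right singular vector of X.

    Illustrative only -- a simplified stand-in for a NIPALS-style
    fast SVD, not the parsimony source code.
    """
    M, N = X.shape
    rng = np.random.default_rng(0)
    # Random start vector, analogous to the default start_vector.
    v = rng.standard_normal((N, 1))
    v /= np.linalg.norm(v)
    # The trick: when M >> N, iterate on the small N-by-N matrix X.T X.
    # (When M << N, one would instead iterate on the M-by-M matrix
    # X X.T and map the result back through X.T.)
    K = X.T @ X
    for _ in range(max_iter):
        v_new = K @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < eps:  # stopping criterion
            v = v_new
            break
        v = v_new
    return v

X = np.random.default_rng(0).random((100, 50))  # tall matrix, M > N
v = dominant_right_singular_vector(X)
s = np.linalg.svd(X, compute_uv=False)
# ||X v|| should match the largest singular value of X.
print(abs(np.linalg.norm(X @ v) - s.max()) < 1e-10)
```

Each multiplication by K costs O(N^2) instead of the O(MN * min(M, N)) of a full SVD, which is where the advantage for strongly rectangular matrices comes from.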
Overrides: bases.ImplicitAlgorithm.run