2007
DOI: 10.1073/pnas.0709640104
Randomized algorithms for the low-rank approximation of matrices

Abstract: We describe two recently proposed randomized algorithms for the construction of low-rank approximations to matrices, and demonstrate their application (inter alia) to the evaluation of the singular value decompositions of numerically low-rank matrices. Being probabilistic, the schemes described here have a finite probability of failure; in most cases, this probability is rather negligible (10⁻¹⁷ is a typical value). In many situations, the new procedures are considerably more efficient and reliable than the c…
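The core idea behind the randomized schemes the abstract refers to can be sketched in a few lines of NumPy. This is an illustrative implementation of the generic random range-finder-plus-SVD approach, not the paper's exact algorithm; the function name, the oversampling parameter `p`, and the test sizes are all illustrative choices:

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    """Sketch of a randomized SVD for a numerically low-rank matrix A.

    k : target rank
    p : oversampling parameter (illustrative default, not from the paper)
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    # Orthonormalize the sample to get an approximate basis for range(A).
    Q, _ = np.linalg.qr(Y)
    # Project A onto the basis and take a small dense SVD.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vt[:k, :]

# Example: a matrix of exact rank 5 is recovered to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

The "finite probability of failure" mentioned in the abstract corresponds to the small chance that the random test matrix `Omega` misses part of the range of `A`; oversampling drives this probability down extremely fast.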

Cited by 463 publications (478 citation statements)
References 579 publications (903 reference statements)
“…The speed of the quantum algorithm is maximized when the training data kernel matrix is dominated by a relatively small number of principal components. We note that there exist several heuristic sampling algorithms for the SVM [29] and, more generally, for finding eigenvalues or vectors of low-rank matrices [30,31]. Information-theoretic arguments show that classically finding a low-rank matrix approximation is lower bounded by Ω(M) in the absence of prior knowledge [32], suggesting a similar lower bound for the least-squares SVM.…”
mentioning
confidence: 97%
“…For a large enough sampling, the effect of H is essentially captured with an emphasis on the correlations, therefore taking more from the signal than from the noise. This random sampling considerably reduces the size of the problem and has already been used in the analysis of very large datasets (23,26,27).…”
Section: Methods
mentioning
confidence: 99%
“…We propose here a simplified algorithm based on a random sampling of the transfer function associated with the matrix viewed as an operator. For large enough samplings, the data consistency is ensured through a variant of the Johnson-Lindenstrauss lemma (21,23).…”
Section: Methods
mentioning
confidence: 99%
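The Johnson-Lindenstrauss phenomenon invoked in the excerpt above can be demonstrated directly: a Gaussian random projection approximately preserves distances between points. A minimal sketch; the dimensions and seed are arbitrary choices, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 5000, 1000  # points, ambient dimension, sketch dimension

# Random points in high-dimensional space.
X = rng.standard_normal((n, d))

# Gaussian random projection, scaled so norms are preserved in expectation.
S = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ S

# Compare one pairwise distance before and after projection; the
# Johnson-Lindenstrauss lemma says the ratio concentrates near 1.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
```

The "random sampling of the transfer function" in the excerpt applies the same principle: a modest number of random probes suffices to capture the action of the operator with small distortion.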
“…For a numerically low-rank matrix, its approximate singular value decomposition can be efficiently evaluated using randomized algorithms (see e.g. [33,34,35]). The idea is briefly reviewed below, though presented in a slightly non-standard way.…”
Section: Spectrum Sweeping Methods For Estimating Spectral Densities
mentioning
confidence: 99%
“…(33) states that the trace of the matrix function f(A) can be accurately computed from the integral of f(s)φ_σ(s), which is now a smooth function. Since the spectrum of A is assumed to be in the interval (−1, 1), the integration range in Eq.…”
Section: Application To Trace Estimation Of General Matrix Functions
mentioning
confidence: 99%
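The trace-estimation idea in the excerpt above is commonly realized with a stochastic (Hutchinson-type) estimator. A minimal sketch, assuming a symmetric A with spectrum in (−1, 1) as the excerpt states; the choice f = exp and the eigendecomposition-based application of f(A) are illustrative simplifications (in practice f(A)z would be applied via a polynomial expansion):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Symmetric test matrix with spectrum inside (-1, 1).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = rng.uniform(-0.9, 0.9, n)
A = (Q * eigs) @ Q.T

def f(x):
    return np.exp(x)  # illustrative choice of matrix function

# Exact trace of f(A) via the known eigenvalues (for comparison only).
exact = np.sum(f(eigs))

# Apply f(A) to a vector through an eigendecomposition of A.
w, V = np.linalg.eigh(A)
def apply_fA(z):
    return V @ (f(w) * (V.T @ z))

# Hutchinson estimator: tr(f(A)) is the expectation of z^T f(A) z
# over random sign vectors z, so average over a batch of probes.
num_probes = 500
ests = [
    (z := rng.choice([-1.0, 1.0], size=n)) @ apply_fA(z)
    for _ in range(num_probes)
]
estimate = np.mean(ests)
rel_err = abs(estimate - exact) / abs(exact)
```

Each probe requires only matrix-vector products with f(A), which is what makes the approach attractive for the large matrices the excerpt has in mind.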