2011
DOI: 10.1137/090771806
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

Abstract: Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.

Cited by 3,217 publications (4,432 citation statements)
References 117 publications
“…to [1,7]), compute the SVD of the small m̂ × n̂ matrix Â := U*AV = Û Σ̂ V̂*, and obtain an approximate tall-skinny SVD as A ≈ (U Û) Σ̂ (V V̂)*, represented explicitly as a low-rank (rank min(m̂, n̂)) matrix. Some of the columns of U Û and V V̂ then approximate the exact left and right singular vectors of A.…”
Section: B (Yuji Nakatsukasa); citation type: mentioning
confidence: 99%
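The two-sided projection in the excerpt above can be sketched in NumPy. This is illustrative only: the randomized range finders used to build U and V, the test matrix, and all variable names are our assumptions, not the cited paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test matrix (m x n) of exact rank k with decaying singular values.
m, n, k = 200, 150, 10
A = (rng.standard_normal((m, k)) * np.logspace(0, -6, k)) @ rng.standard_normal((k, n))

# Orthonormal bases U (m x r) and V (n x r) for the approximate column
# and row spaces of A, via randomized range finders (one possible choice).
r = 15
U, _ = np.linalg.qr(A @ rng.standard_normal((n, r)))
V, _ = np.linalg.qr(A.T @ rng.standard_normal((m, r)))

# SVD of the small r x r projected matrix A_hat = U* A V.
A_hat = U.T @ A @ V
Uh, s, Vht = np.linalg.svd(A_hat)

# Approximate SVD of A:  A ~ (U Uh) diag(s) (V Vh)*.
U_approx = U @ Uh          # approximate left singular vectors
V_approx = V @ Vht.T       # approximate right singular vectors
A_approx = U_approx @ np.diag(s) @ V_approx.T

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

Because the test matrix has exact rank k < r, the range finders capture its column and row spaces and the relative error is near machine precision; on general matrices the error is governed by the singular values beyond rank r.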
“…Some of the state-of-the-art algorithms for an approximate SVD, such as [6,7], rely on one-sided projection methods, instead of two-sided projection as described so far. In this case one would approximate A ≈ (U Û) Σ̂ V̂*, where Û Σ̂ V̂* = U*A is the SVD of the m̂ × n matrix obtained by projecting A only from the left side by U (or from the right by V; we discuss left projection for definiteness).…”
Section: When One-sided Projection Is Used; citation type: mentioning
confidence: 99%
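A minimal NumPy sketch of the one-sided variant described above, in which A is projected only from the left by an orthonormal U (the Gaussian range finder, test matrix, and names are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Test matrix (m x n) of exact rank k.
m, n, k = 300, 100, 8
A = (rng.standard_normal((m, k)) * np.logspace(0, -4, k)) @ rng.standard_normal((k, n))

# One-sided projection: orthonormal U (m x r) from a randomized range
# finder applied to the columns of A only.
r = 12
U, _ = np.linalg.qr(A @ rng.standard_normal((n, r)))

# SVD of the small r x n matrix U* A.
Uh, s, Vt = np.linalg.svd(U.T @ A, full_matrices=False)

# One-sided approximation  A ~ (U Uh) diag(s) V*.
A_approx = (U @ Uh) @ np.diag(s) @ Vt
err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

Only one small dense SVD of an r × n matrix is needed, which is the appeal of the one-sided scheme when m is much larger than n.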
“…For example, a dominant part of the dictionary construction process is the low-rank Nyström approximation provided by the constructed ONM (see Definition 4.1 and Section 4). An alternative approach for obtaining such an approximation is to use a randomized method such as the methods presented in [18]. Such an implementation will be explored in future works, and the impact of this change on the distance approximation error will be analyzed.…”
Section: Future Work and Possible Implementation Optimizations; citation type: mentioning
confidence: 99%
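For concreteness, a column-sampling Nyström approximation of the kind the excerpt contrasts with the randomized methods of [18] can be sketched as follows (the kernel matrix, sample size, and names are our assumptions, not the cited construction):

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric PSD kernel matrix on random 2-D points (a stand-in, purely
# illustrative, for the matrix being approximated).
n = 200
X = rng.standard_normal((n, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / 4.0)

# Column-sampling Nystrom approximation: K ~ C W^+ C^T, where
# C = K[:, idx] and W is the corresponding principal submatrix.
idx = rng.choice(n, size=40, replace=False)
C = K[:, idx]
W = K[np.ix_(idx, idx)]
K_approx = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

The pseudoinverse cutoff guards against the near-singular W that smooth kernels produce; randomized range finders as in [18] replace the column subset with a random projection of all columns.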
“…To this end, we can borrow approaches developed in the literature on randomized linear algebra (e.g. [19] and references therein). The main idea is that the matrix A is replaced by Ã = P A Pᵀ, where P ∈ ℝ^{k×n} is a suitably chosen fat sketching matrix with k ≪ n [20].…”
Section: Iterative Refinement on Approximate Covariance Matrices; citation type: mentioning
confidence: 99%
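The sketching step Ã = P A Pᵀ from the excerpt can be illustrated as follows; a Gaussian sketching matrix is one standard choice among several, and the covariance matrix A and the scaling of P are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# PSD "covariance" matrix A = Q diag(eigs) Q^T with fast eigenvalue decay.
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.logspace(0, -6, n)
A = (Q * eigs) @ Q.T

# Fat Gaussian sketching matrix P (k x n), k << n, scaled so that
# E[P P^T] = I_k.
k = 50
P = rng.standard_normal((k, n)) / np.sqrt(n)

# Sketched k x k matrix A_tilde = P A P^T; the two-sided sketch
# preserves symmetry and positive semidefiniteness.
A_tilde = P @ A @ P.T
```

Replacing the n × n matrix by a k × k sketch is what makes each refinement iteration cheap; how much of the dominant spectrum survives depends on the eigenvalue decay of A and the choice of k.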