2016
DOI: 10.1080/03081087.2016.1267104

Literature survey on low rank approximation of matrices

Abstract: Low rank approximation of matrices has been well studied in the literature. The singular value decomposition, QR decomposition with column pivoting, rank-revealing QR factorization (RRQR), interpolative decomposition, etc. are classical deterministic algorithms for low rank approximation, but these techniques are expensive (O(n^3) operations are required for an n × n matrix). There are several randomized algorithms available in the literature which are not as expensive as the classical techniques (but the complexit…
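The randomized algorithms the abstract alludes to typically replace the full O(n^3) factorization with a random projection followed by a small dense SVD. The sketch below illustrates the idea in the spirit of the Halko–Martinsson–Tropp range-finder; the function name and parameters are illustrative, not taken from the surveyed paper.

```python
import numpy as np

def randomized_low_rank(A, k, p=10, seed=0):
    """Illustrative rank-k approximation via a Gaussian sketch.

    A random test matrix compresses A onto a (k+p)-dimensional subspace;
    an SVD of the small projected matrix then yields the approximation.
    Cost is O(mnk), versus O(mn^2) for a full SVD of an m x n matrix.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A Omega)
    B = Q.T @ A                               # small (k+p) x n matrix
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat                             # lift left factors back to m dims
    return U[:, :k], s[:k], Vt[:k, :]

# Usage: an exactly rank-1 matrix is recovered to machine precision.
A = np.outer(np.arange(1.0, 101.0), np.arange(1.0, 81.0))  # 100 x 80, rank 1
U, s, Vt = randomized_low_rank(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

With a small oversampling parameter p, this reproduces the leading singular subspace with high probability when the spectrum decays quickly.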

Cited by 117 publications (27 citation statements)
References 147 publications (254 reference statements)
“…Clearly, each segment, except those on level L, has two child segments. In a nutshell, A^(ℓ) considers the interaction at level ℓ between a segment and its interaction list, and A^(ad) considers all the interactions between adjacent segments at level L. The key idea behind H-matrices is to approximate the nonzero blocks A^(ℓ) by a low rank approximation (see [34] for a thorough review). This idea is depicted in Fig.…”
Section: H-matrices
confidence: 99%
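The compressibility of off-diagonal H-matrix blocks can be seen with a truncated SVD. The toy example below (not from the cited work; the kernel and cluster geometry are hypothetical) shows that a kernel block between two well-separated point clusters is numerically low rank.

```python
import numpy as np

# Off-diagonal block of a 1/|x - y| kernel between two well-separated
# 1-D point clusters: such blocks admit accurate low-rank approximations,
# which is what H-matrix compression exploits.
x = np.linspace(0.0, 1.0, 200)      # source cluster
y = np.linspace(3.0, 4.0, 200)      # well-separated target cluster
K = 1.0 / np.abs(x[:, None] - y[None, :])

U, s, Vt = np.linalg.svd(K)
# Numerical rank at relative tolerance 1e-10 (s is sorted descending).
k = int(np.searchsorted(-s, -1e-10 * s[0]))
K_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(K - K_k) / np.linalg.norm(K)
```

The singular values decay exponentially because the clusters are well separated, so a rank far below 200 already meets a 1e-10 tolerance.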
“…The bottleneck is computing its inverse, which has complexity O(N^3). Similarly to other well-established machine learning algorithms that share this bottleneck, one could make use of approximations that trade accuracy for computational expense [17]. We also note that the per-iteration complexity scales linearly in N, due to the normalization step.…”
Section: Time Complexity
confidence: 93%
“…The focus of this paper is not the 'best heuristic' for the matrix case, but the extension to high-dimensional problems. We refer the reader to [53] for a review of matrix low-rank approximation algorithms.…”
Section: Algorithm 2: One Step of the Matrix Cross Interpolation Algorithm
confidence: 99%