2019
DOI: 10.1002/nla.2234
Sampling and multilevel coarsening algorithms for fast matrix approximations

Abstract: This paper addresses matrix approximation problems for matrices that are large, sparse, and/or representations of large graphs. To tackle these problems, we consider algorithms that are based primarily on coarsening techniques, possibly combined with random sampling. A multilevel coarsening technique is proposed, which utilizes a hypergraph associated with the data matrix and a graph coarsening strategy based on column matching. We consider a number of standard applications of this technique as well as…
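To make the coarsening idea concrete, below is a minimal Python sketch of one coarsening level by greedy column matching: columns are paired by cosine similarity and each matched pair is merged into a single representative column. The function name, the averaging rule, and the greedy pairing order are illustrative assumptions; the paper itself derives its matching from a hypergraph associated with the data matrix, which this sketch does not reproduce.

```python
import numpy as np

def coarsen_once(A):
    """One coarsening level: greedily pair columns by cosine similarity
    and merge each matched pair into its average. Illustrative sketch only;
    the paper's matching is driven by a hypergraph on the data matrix."""
    d = A.shape[1]
    normed = A / np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    sim = normed.T @ normed            # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)
    unmatched = set(range(d))
    cols = []
    # Visit columns whose best match is strongest first (greedy matching).
    for j in np.argsort(-sim.max(axis=1)):
        j = int(j)
        if j not in unmatched:
            continue
        unmatched.remove(j)
        if unmatched:
            p = max(unmatched, key=lambda q: sim[j, q])
            unmatched.remove(p)
            cols.append(0.5 * (A[:, j] + A[:, p]))  # merge matched pair
        else:
            cols.append(A[:, j])                    # odd column left over
    return np.stack(cols, axis=1)

# Two levels of coarsening: 64 -> 32 -> 16 columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 64))
A_coarse = coarsen_once(coarsen_once(A))
```

Applying coarsen_once repeatedly produces the multilevel hierarchy; each level halves the number of columns while keeping merged columns representative of their matched pair.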

Citation types: 0 supporting, 7 mentioning, 0 contrasting

Years of citing publications: 2020–2024


Cited by 9 publications (7 citation statements)
References 71 publications (223 reference statements)
“…Many modern applications involving large-dimensional data resort to data reduction techniques such as matrix approximation [14,33] or compression [29] in order to lower the amount of data processed, and to speed up computations. In recent years, randomized sketching techniques have attracted much attention in the literature for computing such matrix approximations, due to their high efficiency and well-established theoretical guarantees [19,14,34].…”
Section: Introduction (mentioning)
confidence: 99%
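As an illustration of the randomized sketching the quote refers to, here is a minimal Python sketch of low-rank approximation in the style of the standard randomized range finder. The Gaussian test matrix and the oversampling parameter are conventional choices from the sketching literature, not details taken from this paper.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Rank-k approximation via a Gaussian sketch of the column space.
    Standard randomized range-finder recipe; parameters are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # basis for a sketched range of A
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]  # lift back and truncate to rank k
```

The oversampling slack (here 10 extra columns) is the usual device for making the sketched range capture the top-k subspace with high probability.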
“…Sketching methods have been used to speed up numerical linear algebra problems such as least squares regression, low-rank approximation, matrix multiplication, and approximating leverage scores [28,11,14,34,32]. These primitives as well as improved matrix algorithms have been developed based on sketching for various tasks in machine learning [5,37], signal processing [9,26], scientific computing [33], and optimization [25,23].…”
Section: Introduction (mentioning)
confidence: 99%
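For the least-squares regression case mentioned in this quote, a hedged "sketch-and-solve" example: compress the rows of the problem with a random projection and solve the smaller system. The Gaussian embedding and the sketch size below are illustrative assumptions; practical implementations often prefer faster structured or sparse embeddings.

```python
import numpy as np

def sketched_least_squares(A, b, m, rng):
    """Approximate argmin_x ||Ax - b|| by solving the m-row sketched
    problem min ||S A x - S b||, with S a Gaussian embedding (m << n)."""
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20000, 50))
b = rng.standard_normal(20000)
x_approx = sketched_least_squares(A, b, m=500, rng=rng)
```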
“…Matrices with low-rank structure are omnipresent in signal processing, data analysis, scientific computing and machine learning applications including system identification [1], subspace clustering [2], matrix completion [3], background subtraction [4], [5], least-squares regression [6], hyperspectral imaging [7], [8], anomaly detection [9], [10], subspace estimation over networks [11], genomics [12], [13], tensor decompositions [14], and sparse matrix problems [15].…”
Section: Introduction (mentioning)
confidence: 99%
“…But, for a good rank-k approximation, the sketch size will have to be m = O(k²/ε²) [34]. Applications involving low-rank approximation of large sparse matrices include Latent Semantic Indexing (LSI) [20], matrix completion problems [15], subspace tracking in signal processing [36] and others [33].…”
(mentioning)
confidence: 99%
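To connect the quoted sketch size to an algorithm, here is a hedged Python sketch of Clarkson–Woodruff-style low-rank approximation from a CountSketch with m = ⌈k²/ε²⌉ rows: form SA, project A onto its row space, and truncate to rank k. The constant of 1 in the sketch size and the hashing details are illustrative assumptions, not constants from the cited work.

```python
import numpy as np

def countsketch(A, m, rng):
    """Compute S A for an m x n CountSketch S: each row of A is added,
    with a random sign, to one of m hash buckets."""
    n = A.shape[0]
    bucket = rng.integers(0, m, size=n)
    sign = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, bucket, sign[:, None] * A)
    return SA

def sketched_rank_k(A, k, eps, rng):
    """Rank-k approximation from a CountSketch with m = ceil(k^2 / eps^2)
    rows, in the spirit of the (1 + eps) guarantees quoted above."""
    m = int(np.ceil(k**2 / eps**2))
    V, _ = np.linalg.qr(countsketch(A, m, rng).T)  # row-space basis of SA
    B = A @ V                                      # project A onto that space
    U, s, Wt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :k] * s[:k]) @ (Wt[:k] @ V.T)     # truncate to rank k
```

Because S has a single nonzero per column, SA can be formed in time proportional to the number of nonzeros of A, which is what makes this attractive for the large sparse matrices the quote mentions.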