2019
DOI: 10.1137/18m1210101
Randomized GPU Algorithms for the Construction of Hierarchical Matrices from Matrix-Vector Operations

Abstract: Randomized algorithms for the generation of low-rank approximations of large dense matrices have become popular methods in scientific computing and machine learning. In this paper, we extend the scope of these methods and present batched GPU randomized algorithms for the efficient generation of low-rank representations of large sets of small dense matrices, as well as their generalization to the construction of hierarchically low-rank symmetric H² matrices with general partitioning structures. In both cases, …
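The core technique the abstract builds on is the randomized range finder of Halko et al. [13]: the matrix is probed with Gaussian test vectors and compressed through a small factorization, using only matrix-vector products. A minimal NumPy sketch of that general idea follows; the function and parameter names are illustrative assumptions, not the paper's GPU API.

```python
import numpy as np

def randomized_low_rank(matvec, rmatvec, n, k, p=10, rng=None):
    """Randomized low-rank approximation A ~= U @ diag(s) @ Vt using only
    matrix-vector products, in the spirit of Halko et al. [13].
    `matvec(X)` applies A to the columns of X; `rmatvec(X)` applies A^T.
    All names here are illustrative, not the paper's API."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((n, k + p))    # Gaussian test matrix
    Y = matvec(Omega)                          # sample the range of A
    Q, _ = np.linalg.qr(Y)                     # orthonormal range basis
    B = rmatvec(Q).T                           # B = Q^T A (small matrix)
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]

# Example: approximate a numerically rank-20 dense matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 500))
U, s, Vt = randomized_low_rank(lambda X: A @ X, lambda X: A.T @ X,
                               n=500, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```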

Cited by 17 publications (22 citation statements, published 2020–2022); references 29 publications (41 reference statements).

Citation statements:
“…We start by defining a GP on the domain Y. Let R_{Y×Y} K be the restriction of the covariance kernel K to the domain Y × Y, which is a continuous symmetric positive definite kernel, so that GP(0, R_{Y×Y} K) defines a GP on Y. We choose a target rank k ≥ 1 and an oversampling parameter p ≥ 2, and form a quasimatrix…”
Section: Randomized SVD for Admissible Domains
Mentioning confidence: 99%
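The excerpt above replaces the Gaussian test matrix with random functions drawn from a Gaussian process. A minimal discretized sketch of that sampling step, assuming a grid on Y and a squared-exponential kernel (both illustrative choices, not the cited paper's code):

```python
import numpy as np

def gp_test_matrix(y, k, p=2, ell=0.2, rng=None):
    """Discretized analogue of the quasimatrix of k+p random functions
    drawn from GP(0, R_{YxY} K): sample at grid points `y` using a
    squared-exponential kernel (an assumed choice, for illustration)."""
    rng = np.random.default_rng() if rng is None else rng
    K = np.exp(-(y[:, None] - y[None, :])**2 / (2 * ell**2))
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(y)))  # jitter keeps K SPD
    return L @ rng.standard_normal((len(y), k + p))    # columns ~ GP(0, K)

y = np.linspace(0.0, 1.0, 200)   # grid discretizing the domain Y
Omega = gp_test_matrix(y, k=10)  # 200 x 12 "quasimatrix" of GP samples
```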
“…Performance is obtained because the large number of compute-intensive factorizations, both QR and SVD, performed at every level can be executed efficiently by batched kernels. We have developed batched QR and batched adaptive randomized SVD operations for this purpose [26,27].…”
Section: (B) Linear Algebra Operations With Hierarchical Matrices
Mentioning confidence: 99%
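The batching pattern the excerpt describes, many small independent factorizations executed in a single call, can be sketched on the CPU with NumPy's stacked QR (available in NumPy ≥ 1.22); the GPU batched kernels of [26,27] follow the same calling pattern but are not reproduced here.

```python
import numpy as np

# Batched QR over a large set of small dense matrices. NumPy (>= 1.22)
# maps qr across the leading batch dimension; a GPU implementation would
# use an equivalent batched kernel, which is an assumption here, not the
# authors' code.
rng = np.random.default_rng(0)
batch = rng.standard_normal((1000, 32, 16))  # 1000 matrices of size 32x16
Q, R = np.linalg.qr(batch)                   # Q: (1000, 32, 16), R: (1000, 16, 16)
assert np.allclose(Q @ R, batch)
```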
“…We build on the hierarchical matrix-vector and low-rank update operations to develop an algorithm for constructing a hierarchical matrix approximation of a 'black box' matrix that is accessible only via matrix-vector products [27]. The algorithm generalizes the popular randomized algorithms for generating low-rank approximations of large dense matrices [13] to the case of general hierarchical H² matrices.…”
Section: (B) Linear Algebra Operations With Hierarchical Matrices
Mentioning confidence: 99%
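The key primitive in such matvec-only construction is that a structured test matrix, zero outside a chosen column block, lets a global matrix-vector product sample an individual sub-block. A simplified sketch of that sampling step follows; the actual level-by-level algorithm also subtracts already-resolved blocks, which is omitted, and all names are illustrative.

```python
import numpy as np

def sample_block(matvec, rows, cols, n, k, p=5, rng=None):
    """Sample the sub-block A[rows, cols] of a black-box matrix using
    only full matrix-vector products, then find a rank-(k+p) range
    basis for it. Illustrates the basic sampling idea only; the H^2
    algorithm of [27] proceeds level by level with corrections."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = np.zeros((n, k + p))
    Omega[cols] = rng.standard_normal((len(cols), k + p))  # restrict to cols
    Y = matvec(Omega)[rows]          # rows of A @ Omega = A[rows, cols] @ G
    Q, _ = np.linalg.qr(Y)           # orthonormal basis for the block's range
    return Q                         # further factors follow as in the sketch above

n = 256
A = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
Q = sample_block(lambda X: A @ X, rows=np.arange(0, 64),
                 cols=np.arange(128, 192), n=n, k=8)
```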
“…which produces the sum of a hierarchical matrix A of size N × N and a globally low-rank matrix whose X and Y factors are of size N × k with k ≪ N. This operation can be implemented efficiently [30,54] by first adding the contributions of XYᵀ to the various blocks of A at all levels, and then recompressing the resulting sum algebraically as described earlier. The low-rank update is a key routine in the operation that generates an explicit hierarchical matrix representation of an operator accessible only via matrix-vector products.…”
Section: General Linear Algebra Operations On Hierarchical Matrices
Mentioning confidence: 99%
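Per low-rank block, the "recompressing the resulting sum algebraically" step amounts to a standard QR-plus-SVD recompression of concatenated factors. A minimal sketch of that per-block kernel, with illustrative names:

```python
import numpy as np

def lowrank_update(U, V, X, Y, tol=1e-8):
    """Recompress U @ V.T + X @ Y.T into a single truncated low-rank
    factorization via QR of the stacked factors and an SVD of the
    small core matrix."""
    Qu, Ru = np.linalg.qr(np.hstack([U, X]))   # combined column basis
    Qv, Rv = np.linalg.qr(np.hstack([V, Y]))   # combined row basis
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)        # small core SVD
    r = max(1, int(np.sum(s > tol * s[0])))    # truncation rank
    return (Qu @ W[:, :r]) * s[:r], Qv @ Zt[:r].T

rng = np.random.default_rng(0)
U, V = rng.standard_normal((300, 6)), rng.standard_normal((300, 6))
X, Y = rng.standard_normal((300, 4)), rng.standard_normal((300, 4))
Unew, Vnew = lowrank_update(U, V, X, Y)
err = np.linalg.norm(U @ V.T + X @ Y.T - Unew @ Vnew.T)  # ~ machine eps
```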
“…This can be performed efficiently using Horner's method for evaluating polynomials. The resulting products are used in the level-by-level construction of the hierarchical matrix X_{p+1} [54]. We use methods of order 16 to construct the approximate inverse preconditioners used in the results shown below.…”
Section: General Linear Algebra Operations On Hierarchical Matrices
Mentioning confidence: 99%
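Horner's method evaluates a degree-d polynomial in A with d multiplications by A; applied to a vector it needs only matvecs. A generic sketch of that evaluation pattern (not the code of [54]):

```python
import numpy as np

def horner_apply(matvec, coeffs, v):
    """Evaluate p(A) @ v with Horner's rule using only matvecs, where
    p(A) = c0*I + c1*A + ... + cd*A^d and coeffs = [c0, ..., cd]."""
    result = coeffs[-1] * v
    for c in reversed(coeffs[:-1]):
        result = matvec(result) + c * v   # result <- A @ result + c*v
    return result

# Example: p(A) = 1 + 2A + 3A^2 applied to a random vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
v = rng.standard_normal(50)
out = horner_apply(lambda x: A @ x, [1.0, 2.0, 3.0], v)
ref = v + 2 * (A @ v) + 3 * (A @ (A @ v))
assert np.allclose(out, ref)
```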