2018
DOI: 10.1137/17m1151158

Low-Rank Factorizations in Data Sparse Hierarchical Algorithms for Preconditioning Symmetric Positive Definite Matrices

Abstract: We consider the problem of choosing low-rank factorizations in data sparse matrix approximations for preconditioning large scale symmetric positive definite matrices. These approximations are memory efficient schemes that rely on hierarchical matrix partitioning and compression of certain sub-blocks of the matrix. Typically, these matrix approximations can be constructed very fast, and their matrix product can be applied rapidly as well. The common practice is to express the compressed sub-blocks by low-rank f…
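The abstract describes compressing selected sub-blocks of a hierarchically partitioned matrix with low-rank factorizations. As an illustrative sketch only (not the paper's specific scheme), a truncated SVD compresses a smooth off-diagonal-style block; the kernel below is a made-up example chosen so its singular values decay quickly:

```python
import numpy as np

# A smooth "kernel" block, typical of the well-separated off-diagonal
# sub-blocks that hierarchical schemes compress: entries vary smoothly
# with the distance between the two point sets.
n = 200
x = np.linspace(0.0, 1.0, n)
B = 1.0 / (1.0 + 25.0 * np.abs(x[:, None] - (x[None, :] + 2.0)))

# Truncated SVD: keep only the k dominant singular triplets.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = 10
B_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
print(f"rank-{k} relative error: {rel_err:.2e}")
```

Storage drops from n² entries to 2nk + k, while the approximation error is governed by the first discarded singular value — the trade-off the paper's choice of factorization is about.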

Cited by 5 publications (7 citation statements) · References 35 publications
“…It appears that, up to a certain level, we are still able to find a robust coarse space despite having significantly reduced apply. This is left for future work (see [1] for preliminary investigations in this direction) and we do not consider approximation techniques in the high performance implementation we propose below. … required for computing dot products, while the matrix-vector product can be performed concurrently on each subdomain and the application of the (one-level) preconditioner only requires neighbor-to-neighbor communications.…”
Section: Approximate Case
confidence: 99%
“…Here, we show that both types of schemes are similarly effective and also give a more intuitive explanation of the effectiveness by extending Equation (10). … For the Cholesky SIF factor L̃ in Equation (6), Equation (9) holds with the nonzero singular values of Ĉ in Equation (10) given by…”
Section: Spectral Analysis For Cholesky And ULV SIF Preconditioning
confidence: 99%
“…where the last equality in the second line is due to the Sherman-Morrison-Woodbury formula and the result Ũ₂ᵀÛ₂ = 0. The eigenvalues of L̃⁻¹AL̃⁻ᵀ can then be immediately obtained based on Equation (9).…”
Section: Theorem 1 Suppose The Smaller Of The Row And Column Sizes O…
confidence: 99%
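The excerpt above invokes the Sherman-Morrison-Woodbury formula, which expresses the inverse of a low-rank update (A + UCV)⁻¹ = A⁻¹ − A⁻¹U(C⁻¹ + VA⁻¹U)⁻¹VA⁻¹. A small numerical check on random matrices (sizes and seed are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2

A = np.diag(rng.uniform(1.0, 2.0, n))   # easy-to-invert base matrix
U = rng.standard_normal((n, k))
C = np.diag(rng.uniform(0.5, 1.5, k))   # invertible k-by-k core
V = rng.standard_normal((k, n))

# Left-hand side: direct inverse of the rank-k update.
lhs = np.linalg.inv(A + U @ C @ V)

# Right-hand side: Sherman-Morrison-Woodbury formula, which only
# needs A^{-1} and the inverse of a small k-by-k "capacitance" matrix.
Ainv = np.linalg.inv(A)
core = np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U)
rhs = Ainv - Ainv @ U @ core @ V @ Ainv

print(np.allclose(lhs, rhs))
```

The payoff in this setting: when A⁻¹ is cheap to apply and k ≪ n, the update's inverse costs O(k³) extra work instead of a fresh O(n³) inversion.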
“…One type is in [11,12,13,18], based on low-rank strategies for approximating A⁻¹. Another type is in [1,6,8,9,14,19,21,22], where approximate Cholesky factorizations are computed using low-rank approximations of relevant off-diagonal blocks. Both types of methods have been shown useful for many applications.…”
confidence: 99%
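The spectral-analysis excerpts above study the eigenvalues of the preconditioned matrix L̃⁻¹AL̃⁻ᵀ. A minimal synthetic demonstration of the clustering effect they rely on (the perturbation stands in for compression error; it is not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# A synthetic SPD test matrix (illustrative stand-in only).
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)

# "Approximate" Cholesky factor: factor a slightly perturbed matrix,
# mimicking a factorization whose off-diagonal blocks were compressed.
E = 1e-3 * rng.standard_normal((n, n))
L = np.linalg.cholesky(A + (E + E.T) / 2)

# Eigenvalues of the preconditioned matrix L^{-1} A L^{-T}:
# with a good approximate factor they cluster tightly around 1,
# which is what makes conjugate gradients converge in few iterations.
X = np.linalg.solve(L, A)        # L^{-1} A
P = np.linalg.solve(L, X.T).T    # L^{-1} A L^{-T}
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print(f"max |lambda - 1| = {np.max(np.abs(eigs - 1.0)):.2e}")
```

Here L L ᵀ = A + O(10⁻³), so the preconditioned spectrum deviates from 1 by roughly the perturbation size divided by the smallest eigenvalue of A.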