2017
DOI: 10.1137/15m1046939

Fast Hierarchical Solvers For Sparse Matrices Using Extended Sparsification and Low-Rank Approximation

Abstract: Inversion of sparse matrices with standard direct solve schemes is robust, but computationally expensive. Iterative solvers, on the other hand, demonstrate better scalability, but need to be used with an appropriate preconditioner (e.g., ILU, AMG, Gauss-Seidel) for proper convergence. The choice of an effective preconditioner is highly problem dependent. We propose a novel fully algebraic sparse matrix solve algorithm, which has linear complexity with the problem size. Our scheme is based on th…
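The low-rank compression step that hierarchical solvers of this kind rely on can be sketched with a truncated SVD of an interaction block between two well-separated clusters. This is a hedged illustration, not the paper's algorithm: the 1/r kernel, the cluster geometry, and the tolerance below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy kernel block: smooth 1/r interactions between two well-separated
# point clusters, which typically gives rapidly decaying singular values.
x = rng.uniform(0.0, 1.0, 50)                 # source cluster on [0, 1]
y = rng.uniform(3.0, 4.0, 50)                 # target cluster on [3, 4]
A = 1.0 / np.abs(x[:, None] - y[None, :])     # 50 x 50 interaction block

# Truncated SVD: keep only singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.searchsorted(-s, -1e-10 * s[0]))   # numerical rank at tol 1e-10
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]          # rank-k approximation

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(k, rel_err)   # k is far below 50; rel_err is near the tolerance
```

The point of the example is that a well-separated block admits a small numerical rank k, so storing the factors U[:, :k], s[:k], Vt[:k, :] replaces a dense block at a fraction of the cost.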


Cited by 39 publications (84 citation statements)
References 55 publications
“…In comparison to the quadratic complexity of the full-rank solver, most sparse solvers based on hierarchical formats have been shown to possess near-linear complexity. To cite a few, [40,39,22] are HSS-based, [24] is HBS-based, [7] is HODLR-based, and [34] is H²-based. Other related work includes [28], a multifrontal solver with front skeletonization, and [37], a Cholesky solver with fill-in compression.…”
mentioning
confidence: 99%
“…In [17], an H² sparse algorithm was described. It is similar in many respects to [14], and extends the work of [16]. All these solvers have guaranteed linear complexity, for a given error tolerance, and assuming a bounded rank for all well-separated pairs of clusters (the admissibility criterion in Hackbusch et al.'s terminology).…”
Section: Introduction (mentioning)
confidence: 96%
“…H² arithmetic [13] has been used in several sparse solvers. In [14], a fast sparse H² solver, called LoRaSp, based on extended sparsification was introduced. In [15], a variant of LoRaSp, aiming at improving the quality of the solver when used as a preconditioner, was presented, along with a numerical analysis of convergence with H² preconditioning.…”
Section: Introduction (mentioning)
confidence: 99%
“…More recently, Pouransari et al. extended the approaches in Darve et al. to solving sparse linear systems, assuming that the fill-ins during a block Gaussian elimination have low numerical rank for well-separated interactions. An approximate factorization has been introduced via extended sparsification, which has time and memory complexity of 𝒪(n log²(1/ϵ)) and 𝒪(n log(1/ϵ)), respectively. Here, ϵ is the accuracy of the low-rank approximations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Despite the high accuracy of approximate factorizations, the robustness of this type of approach is subject to the condition number of the original problem. In Pouransari et al., rather than solving Ax = b directly, the following equation is solved instead: A_H x_H = b, with ‖A_H − A‖ ≤ ϵ‖A‖. Then, the accuracy of the solution is subject to both ϵ and κ(A) = ‖A‖‖A⁻¹‖: ‖x − x_H‖/‖x‖ ≤ C ϵ κ(A), where C is a constant. As the upper bound for the relative accuracy is proportional to the matrix condition number, the accuracy deteriorates when the condition number of the matrix grows.…”
Section: Introduction (mentioning)
confidence: 99%
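The bound quoted in that statement can be checked numerically: solving a perturbed system A_H x_H = b with ‖A_H − A‖ ≤ ϵ‖A‖ gives a relative solution error that grows with ϵ·κ(A). This is a hedged sketch; the matrix, its spectrum, and the value of ϵ are illustrative, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 100, 1e-8

# Symmetric test matrix with a prescribed spectrum: kappa(A) = 1e4.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
sing = np.logspace(0, -4, n)
A = (Q * sing) @ Q.T

x_true = rng.standard_normal(n)
b = A @ x_true

# Perturbation E scaled so that ||A_H - A|| = eps * ||A|| (spectral norm),
# mimicking an eps-accurate hierarchical approximation A_H of A.
E = rng.standard_normal((n, n))
E *= eps * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)
A_h = A + E

x_h = np.linalg.solve(A_h, b)
rel_err = np.linalg.norm(x_h - x_true) / np.linalg.norm(x_true)
kappa = np.linalg.cond(A)
print(rel_err, eps * kappa)   # rel_err is well above eps, bounded by ~eps*kappa
```

The observed error sits far above ϵ itself but below the C·ϵ·κ(A) ceiling, which is exactly the deterioration with condition number the statement describes.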