2008
DOI: 10.1007/s11831-008-9026-x

High Performance Inverse Preconditioning

Abstract: The derivation of parallel numerical algorithms for solving sparse linear systems on modern computer systems and software platforms has attracted the attention of many researchers over the years. In this paper we present an overview on the design issues of parallel approximate inverse matrix algorithms, based on an antidiagonal "wave pattern" approach and a "fish-bone" computational procedure, for computing explicitly various families of exact and approximate inverses for solving sparse linear systems. Paralle…


Cited by 36 publications (38 citation statements)
References 55 publications
“…Gravvanis et al [14], [15] attempt to accelerate a SAI preconditioned BiCGStab iterative solver on Intel multicore architecture by allocating the computation of each iteration of the iterative solver to a different thread; implementation details on how to accelerate the preconditioner computation on a multicore are not presented in this work. Xu et al [16] accelerate factorized SAI on NVIDIA GPUs.…”
Section: SAI Preconditioning
confidence: 99%
“…By generating a denser preconditioner, SAI preconditioning can reduce iterations in iterative solvers considerably and be applied to a broad range of applications. Previous work has accelerated the computation of this preconditioner on multiple processors [5], [6], [7], [8], [9], [10], [11], [12], [13] as well as multicore [14], [15] and manycore architecture [16].…”
Section: Introduction
confidence: 99%
“…When thread 1 has finished the computation of the element m_{7,6} and its symmetric counterpart, then the thread with rank 2 is ready to start the computation of the element m_{6,6}. The data dependency pattern follows the antidiagonal motion (wave pattern approach) described in Giannoutakis and Gravvanis (2008), Gravvanis (2009) and Gravvanis and Giannoutakis (2006):…”
Section: Design Of Inverses Using POSIX Threads
confidence: 99%
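The dependency pattern quoted above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: entries of an approximate inverse M are filled starting from the bottom-right corner, and all entries on the same antidiagonal are mutually independent, so each "wave" can be distributed over a thread pool. The placeholder `entry_fn` stands in for the actual inverse recurrences of the cited papers; all names here are hypothetical.

```python
# Hypothetical sketch of the antidiagonal "wave pattern" schedule for filling
# the lower triangle of an approximate inverse M. Entries on one antidiagonal
# have no mutual dependencies, so each wave is computed concurrently.
from concurrent.futures import ThreadPoolExecutor

def wave_schedule(n):
    """Yield successive antidiagonal waves of index pairs (i, j), i >= j,
    ordered from the corner entry (n-1, n-1) outward."""
    for wave in range(2 * n - 1):
        yield [(i, j)
               for i in range(n - 1, -1, -1)
               for j in range(i + 1)
               if (n - 1 - i) + (n - 1 - j) == wave]

def compute_inverse_entries(n, entry_fn, num_threads=4):
    """Fill the lower triangle of M wave by wave; entry_fn(ij, M) computes
    one entry and may read only entries produced by earlier waves."""
    M = {}
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for wave in wave_schedule(n):
            # M is updated only after the whole wave finishes, so concurrent
            # reads inside entry_fn never race with writes.
            results = pool.map(lambda ij: (ij, entry_fn(ij, M)), wave)
            M.update(dict(results))
    return M
```

For n = 3 the schedule visits (2,2), then (2,1), then {(2,0), (1,1)}, and so on — the wavefront the quoted passage describes between threads 1 and 2.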
“…The convergence of iterative methods can be improved by preconditioners such as the Successive Over Relaxation (SOR) preconditioner [23] and sparse approximate inverse preconditioners that are based on factorized sparse approximate inverses or on the minimization of some convenient norm [12,21]. Recently, explicit approximate inverse preconditioners have been introduced for solving sparse linear systems [17,19,20]. In [7], interested readers will find issues for implementing iterative methods in a sequential manner.…”
Section: Related Work
confidence: 99%
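The role of an explicit approximate inverse preconditioner mentioned above can be illustrated with a minimal sketch. An explicit preconditioner M ≈ A⁻¹ is applied purely by matrix-vector product, e.g. in the preconditioned Richardson iteration x_{k+1} = x_k + M(b − A x_k), which converges when ‖I − MA‖ < 1. For simplicity M here is the crudest explicit approximate inverse, diag(A)⁻¹; the families surveyed in the paper produce denser, more accurate inverses but enter the solver the same way. All code below is an assumption-laden toy, not the cited algorithms.

```python
# Minimal sketch: preconditioned Richardson iteration with an explicit
# approximate inverse M, applied only via matrix-vector products.

def matvec(A, x):
    """Dense matrix-vector product over plain Python lists."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def richardson_explicit_inverse(A, b, M_apply, tol=1e-10, max_iter=500):
    """Solve A x = b with x_{k+1} = x_k + M (b - A x_k)."""
    x = [0.0] * len(b)
    for _ in range(max_iter):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual
        if max(abs(ri) for ri in r) < tol:
            break
        dx = M_apply(r)                                      # apply M ~ A^{-1}
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Diagonally dominant test system with exact solution (1, 2, 3).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]  # M = diag(A)^{-1}
x = richardson_explicit_inverse(A, b, jacobi)
```

Swapping `jacobi` for a denser approximate inverse changes only `M_apply`; the solver loop is untouched, which is why such preconditioners combine naturally with Krylov methods like BiCGStab mentioned in the citation statements.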