2015
DOI: 10.1137/140976017

Increasing the Performance of the Jacobi--Davidson Method by Blocking

Abstract: Block variants of the Jacobi-Davidson method for computing a few eigenpairs of a large sparse matrix are known to improve the robustness of the standard algorithm, but are generally shunned because the total number of floating-point operations increases. In this paper we present the implementation of a block Jacobi-Davidson solver. By detailed performance engineering and numerical experiments we demonstrate that the increase in operations is typically more than compensated by performance gains on modern architectures…

Cited by 25 publications (38 citation statements)
References 35 publications
“…More recently, Liu et al [1] investigated strategies to improve the performance of SpMM using SIMD (AVX/SSE) instructions for Stokesian dynamics simulation of biological macromolecules on modern multicore CPUs. Röhrig-Zöllner et al [19] discuss performance optimization techniques for the block Jacobi-Davidson method to compute a few eigenpairs of large-scale sparse matrices, and report reduced time-to-solution using block methods over single vector counterparts for quantum mechanics problems and PDEs. Finally, Anzt et al [20] describe an SpMM implementation based on the SELLC matrix format, and show that performance improvements in the SpMM kernel can translate into performance improvements in a block eigensolver running on GPUs.…”
Section: Introduction
confidence: 99%
“…For instance, by combining multiple consecutive sparse matrix-vector (SpMV) multiplications into a single sparse matrix-multiple-vector (SpMMV) multiplication, the matrix entries are loaded only once and reused for all vectors, which reduces the overall memory traffic and consequently increases the performance of this memory-bound operation. This was first shown analytically in Gropp et al (1999) and is used in many applications; see, e.g., Röhrig-Zöllner et al (2015); Kreutzer et al (2018).…”
Section: Application
confidence: 95%
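The memory-traffic argument in the statement above can be illustrated with a minimal sketch (assuming SciPy's CSR format; this is not the code of any of the cited papers): applying a sparse matrix to a block of k vectors in one pass produces the same result as k separate SpMVs, while streaming the matrix entries only once.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, k = 1000, 4                       # matrix size and block width (illustrative values)
A = sp.random(n, n, density=0.01, format="csr", random_state=0)
X = rng.standard_normal((n, k))      # block of k dense vectors

# Single-vector variant: k separate SpMVs, each streaming all nonzeros of A.
Y_spmv = np.column_stack([A @ X[:, j] for j in range(k)])

# Block variant (SpMMV): one pass over A applied to all k vectors at once;
# each nonzero of A is loaded once and reused for every column of X.
Y_spmmv = A @ X

assert np.allclose(Y_spmv, Y_spmmv)  # same result, roughly 1/k of the matrix traffic
```

The arithmetic is identical in both variants; the gain comes purely from reusing each loaded matrix entry across the k columns, which is why SpMMV helps a memory-bound kernel.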
“…In contrast to the standard Jacobi-Davidson method, which determines the sought eigenpairs one-by-one, the block Jacobi-Davidson method in ESSEX [24] computes them by groups. Here we will consider only the real standard eigenvalue problem Av_i = v_i λ_i.…”
Section: Using Higher Precision For Robust and Fast Orthogonalization
confidence: 99%
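The group-wise computation described in the last statement can be sketched with a crude block iteration plus Rayleigh-Ritz step for the real standard problem Av_i = v_i λ_i. This is only an illustration of computing a block of eigenpairs together, assuming a symmetric A; it is not the ESSEX block Jacobi-Davidson solver, which uses correction equations and more sophisticated orthogonalization.

```python
import numpy as np

def block_rayleigh_ritz(A, k, iters=200, seed=0):
    """Approximate the k largest-magnitude eigenpairs of symmetric A
    as a group (hypothetical helper, not the ESSEX implementation)."""
    rng = np.random.default_rng(seed)
    # Start from a random orthonormal block of k vectors.
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        V, _ = np.linalg.qr(A @ V)   # block power step + re-orthogonalization
    H = V.T @ A @ V                  # small k x k Rayleigh quotient
    lam, S = np.linalg.eigh(H)       # Ritz values and Ritz vectors
    return lam, V @ S

A = np.diag([10.0, 5.0, 1.0, 0.5])   # simple symmetric test matrix
lam, V = block_rayleigh_ritz(A, k=2)
# The block residuals A v_i - v_i λ_i should be small for both dominant pairs.
```

Working on a block of vectors turns the dominant kernels into SpMMV and tall-skinny QR, which is exactly where the performance gains over single-vector iterations come from.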