2003
DOI: 10.1016/s0021-9991(03)00190-6
Computational experience with sequential and parallel, preconditioned Jacobi–Davidson for large, sparse symmetric matrices

Abstract: The Jacobi-Davidson (JD) algorithm was recently proposed for evaluating a number of the eigenvalues of a matrix. JD goes beyond pure Krylov-space techniques: it cleverly expands its search space by solving the so-called correction equation, thus in principle providing a more powerful method. Preconditioning the Jacobi-Davidson correction equation is mandatory when large, sparse matrices are analyzed. We considered several preconditioners: classical block-Jacobi and IC(0), together with approximate inverse (A…
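The abstract's central idea, expanding the search space by solving a correction equation at each step, can be sketched in a few lines. The following is a minimal, dense, unpreconditioned illustration (the function name and test matrix are hypothetical, not the authors' code); in practice the direct solve of the correction equation is replaced by a preconditioned iterative solver such as CG or BiCGSTAB:

```python
import numpy as np

def jacobi_davidson(A, n_iter=25, tol=1e-10):
    """Minimal Jacobi-Davidson sketch for the smallest eigenpair of a
    symmetric matrix A (dense, unpreconditioned, for illustration only)."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    v0 = rng.standard_normal(n)
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    theta, u = 0.0, V[:, 0]
    for _ in range(n_iter):
        # Rayleigh-Ritz extraction on the current search space V
        theta_all, S = np.linalg.eigh(V.T @ A @ V)
        theta = theta_all[0]
        u = V @ S[:, 0]                    # smallest Ritz pair
        r = A @ u - theta * u              # eigenpair residual
        if np.linalg.norm(r) < tol:
            break
        # Correction equation (the step that goes beyond pure Krylov methods):
        #   (I - u u^T)(A - theta I)(I - u u^T) t = -r,  with t orthogonal to u.
        # Solved here by a dense least-squares solve; this is exactly where a
        # preconditioned Krylov solver is used for large sparse matrices.
        P = np.eye(n) - np.outer(u, u)
        t, *_ = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)
        # Expand and re-orthonormalize the search space with the correction
        V, _ = np.linalg.qr(np.column_stack([V, P @ t]))
    return theta, u

# Usage on a small symmetric test matrix
A = np.diag(np.arange(1.0, 11.0)) + 0.01
theta, u = jacobi_davidson(A)
```

The projector `P` is what distinguishes JD from plain shift-and-invert: it keeps the correction orthogonal to the current Ritz vector, so the new search direction adds genuinely new information.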

Cited by 20 publications (21 citation statements)
References 39 publications
“…Indeed, preconditioned CG converges when solving JD correction equations, though they are not positive definite. However, we showed in Reference [15] that CG performance in our problems is not any time superior to BiCGSTAB.…”
Section: Sequential Preconditioned JD
confidence: 70%
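The quoted observation, that CG can be applied to JD correction equations even though A - θI is indefinite, rests on the fact that the projected operator is positive definite on the subspace orthogonal to u once θ approximates the smallest eigenvalue. A minimal numpy/scipy sketch, assuming a hypothetical diagonal test matrix (not one of the paper's problems):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 100
A = np.diag(np.arange(1.0, n + 1.0))     # SPD test matrix, eigenvalues 1..n
# An approximate Ritz pair (u, theta) near the smallest eigenpair (e1, 1)
u = np.zeros(n)
u[0] = 1.0
u += 1e-3 * np.ones(n)
u /= np.linalg.norm(u)
theta = u @ A @ u                        # Ritz value, slightly above 1

def proj(v):
    return v - u * (u @ v)               # orthogonal projector I - u u^T

def matvec(v):
    # JD correction operator (I - u u^T)(A - theta I)(I - u u^T):
    # indefinite as a whole, but positive definite on span(u)^perp
    return proj((A - theta * np.eye(n)) @ proj(v))

corr = LinearOperator((n, n), matvec=matvec, dtype=float)
r = A @ u - theta * u                    # residual; orthogonal to u by construction
t, info = cg(corr, -r, maxiter=300)      # plain CG, no preconditioner
```

Because the right-hand side lies in span(u)^perp and the matvec projects every product back into that subspace, the CG iterates never leave the subspace on which the operator is positive definite, which is why convergence is possible despite the indefinite shift.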
“…Because its basic ingredients are matrix-vector products, vector updates and inner products it can be parallelized easily [1]. Its JDQR variant [8], which is based on the computation of a partial Schur form of the system matrix, uses deflation techniques to compute a number of interior eigenpairs.…”
Section: Jacobi-Davidson Methods
confidence: 99%
“…Finding the smallest eigenvalues and eigenvectors of a sparse and symmetric matrix is a computational problem which has been studied for decades for problems in physics and chemistry [21,22], and can be solved using distributed algorithms for parallel processing. In particular, if sensors select local cluster-heads, the distributed algorithm can use data-distribution techniques and block-Jacobi preconditioning methods to reduce communication.…”
Section: Computation
confidence: 99%
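The block-Jacobi preconditioning mentioned in the statement above is attractive in distributed settings precisely because both its setup (factoring diagonal blocks) and its application (blockwise solves) use only local data. A dense sketch with hypothetical helper names (an illustration of the technique, not the cited distributed algorithm):

```python
import numpy as np

def block_jacobi_factor(A, block_size):
    """Invert the diagonal blocks of A. Each block involves only local
    rows/columns, so every cluster-head or processor can factor its own
    block without communication (dense inverse for illustration)."""
    n = A.shape[0]
    return [np.linalg.inv(A[i:min(i + block_size, n), i:min(i + block_size, n)])
            for i in range(0, n, block_size)]

def block_jacobi_apply(inv_blocks, r):
    """Apply M^{-1} r blockwise: the preconditioning step of a Krylov
    solver then requires no global data exchange."""
    out = np.empty_like(r)
    i = 0
    for B in inv_blocks:
        j = i + B.shape[0]
        out[i:j] = B @ r[i:j]
        i = j
    return out

# Usage: precondition a residual of a symmetric test matrix
A = np.diag(np.arange(2.0, 14.0)) + 0.1 * np.eye(12, k=1) + 0.1 * np.eye(12, k=-1)
blocks = block_jacobi_factor(A, block_size=4)
z = block_jacobi_apply(blocks, np.ones(12))
```

The off-block couplings are simply dropped, which is the usual trade-off: larger blocks give a better approximation of A at the cost of larger local factorizations, while smaller blocks maximize parallelism and minimize communication.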
“…The dwMDS also has a slower increase in communication requirements than centralized localization algorithms as N increases [16]. Furthermore, since prior information is included directly in (22), there is no need to do post-processing.…”
Section: Computation
confidence: 99%