2011
DOI: 10.1137/100784928
Convergence Analysis of Gradient Iterations for the Symmetric Eigenvalue Problem

Abstract: Gradient iterations for the Rayleigh quotient are simple and robust solvers to determine a few of the smallest eigenvalues together with the associated eigenvectors of (generalized) matrix eigenvalue problems for symmetric matrices. Sharp convergence estimates for the Ritz values and Ritz vectors are derived for various steepest descent/ascent gradient iterations. The analysis shows that the poorest convergence of the eigenvalue approximations is attained in a three-dimensional invariant subspace; explic…

Cited by 14 publications (30 citation statements) · References 13 publications (8 reference statements)
“…For the vectorial A-gradient iteration (3) in the case M = I, a sharp convergence estimate has recently been proved by Theorem 4.1 in [22], which generalizes the convergence estimate of Knyazev and Skorokhodov in [17], where only the final eigenvalue interval is considered, that is, (λ_1, λ_2) for steepest descent and …”
Section: Introduction
confidence: 76%
“…Here, we analyze the general case of intervals (λ_i, λ_{i+1}) with i ∈ {1, …, n − 1}. The estimate in […] provides the analytical ground for the formulation of four Ritz value estimates, which read as follows (see also Theorem 4.1 in […]): Theorem. Consider a symmetric matrix A with eigenpairs (λ_i, x_i), λ_1 < λ_2 < … < λ_n. Let x ∈ ℝⁿ with the Rayleigh quotient λ := ρ_A(x) ∈ (λ_i, λ_{i+1}) for a certain index i ∈ {1, …, n − 1}.…”
Section: Introduction
confidence: 99%
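To make the quoted setup concrete, here is a minimal NumPy sketch (the matrix A and the vector x are made-up test data, and M = I is assumed) that computes the Rayleigh quotient λ = ρ_A(x) and locates the index i with λ ∈ (λ_i, λ_{i+1}):

```python
import numpy as np

# Hypothetical symmetric test matrix with distinct eigenvalues 1 < 2 < 4 < 8
A = np.diag([1.0, 2.0, 4.0, 8.0])
x = np.array([1.0, 1.0, 0.5, 0.25])

lam = (x @ (A @ x)) / (x @ x)           # Rayleigh quotient rho_A(x) for M = I
eigvals = np.linalg.eigvalsh(A)         # lambda_1 < ... < lambda_n, ascending
i = int(np.searchsorted(eigvals, lam))  # number of eigenvalues below lam
print(f"rho_A(x) = {lam:.4f} lies in (lambda_{i}, lambda_{i + 1})")
# -> rho_A(x) = 1.9459 lies in (lambda_1, lambda_2)
```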
“…For computing the smallest eigenvalues of the matrix pair (A, M), the invert-Lanczos process is more efficient. It generates Krylov subspaces with respect to A⁻¹M and generalizes the vector iteration x^(i+1) = x^(i) − ω A⁻¹ r^(i) with the residual r^(i) = A x^(i) − ρ(x^(i)) M x^(i) for the Rayleigh quotient ρ : ℝⁿ \ {0} → ℝ, ρ(x) = (xᵀ A x)/(xᵀ M x). The vector A⁻¹ r^(i) is collinear with the A-gradient of ρ(·) and performs better than the iteration x^(i+1) = x^(i) − ω r^(i) when approximating the smallest eigenvalue. In practice, the product of A⁻¹ and r^(i) is computed by solving the corresponding linear system with preconditioned iterations.…”
Section: Introduction
confidence: 99%
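The quoted passage fully specifies the basic A-gradient iteration, so a short sketch can illustrate it. The following NumPy/SciPy code is an illustrative assumption, not the cited authors' implementation: it applies A⁻¹ by a dense solve instead of the preconditioned iterative solves mentioned in the quote, and it picks the step length ω implicitly through a Rayleigh-Ritz step in span{x, A⁻¹r}.

```python
import numpy as np
from scipy.linalg import eigh

def a_gradient_iteration(A, M, x0, tol=1e-10, max_iter=500):
    """Steepest descent with the A-gradient direction A^(-1) r for the
    smallest eigenpair of the symmetric pair (A, M); a hedged sketch of
    the quoted iteration x^(i+1) = x^(i) - omega * A^(-1) r^(i)."""
    x = x0 / np.sqrt(x0 @ (M @ x0))       # M-normalize the start vector
    rho = (x @ (A @ x)) / (x @ (M @ x))   # Rayleigh quotient rho(x)
    for _ in range(max_iter):
        r = A @ x - rho * (M @ x)         # residual r = A x - rho(x) M x
        if np.linalg.norm(r) < tol:
            break
        p = np.linalg.solve(A, r)         # A-gradient direction A^(-1) r
        V = np.column_stack([x, p])
        # Rayleigh-Ritz in span{x, p}: the smallest Ritz pair realizes the
        # optimal step length omega for this descent direction.
        mu, C = eigh(V.T @ A @ V, V.T @ M @ V)
        x = V @ C[:, 0]
        x /= np.sqrt(x @ (M @ x))
        rho = mu[0]
    return rho, x
```

For example, a_gradient_iteration(np.diag([1.0, 2.0, 4.0, 8.0]), np.eye(4), np.ones(4)) converges to the smallest eigenvalue 1. Choosing ω by Rayleigh-Ritz rather than fixing it in advance is what makes the computed eigenvalue approximations Ritz values in the sense analyzed by the paper.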