2012
DOI: 10.1007/s00211-011-0434-8
Superlinear convergence of the rational Arnoldi method for the approximation of matrix functions

Cited by 25 publications (31 citation statements); references 33 publications.
“…The reason for the rational Arnoldi methods behaving like this is that the Chebyshev eigenvalues are denser at the endpoints of the spectral interval, and almost no spectral adaption takes place during the first 50 iterations shown here. In some sense, the rational Krylov methods behave initially as if the spectrum were not discrete; see [6] for a potential theoretic explanation. We expect our adaptive method to choose roughly the same poles as were chosen in the generalized Leja case, and the plot below confirms this expectation by depicting the (smoothed) empirical distribution functions of the first 50 adaptive poles and generalized Leja poles; the two distributions are visually hard to distinguish.…”
“…In Figure 5.1 (top right) we again show the convergence of the three methods. While the PAIN method still converges linearly with rate R given by (5.3), the standard rational Arnoldi method is somewhat faster because the interpolation nodes (Ritz values) "deflate" some of the left-most eigenvalues of A in early iterations, causing a superlinear convergence speedup (see [6] for an analysis of this effect). The adaptive rational Arnoldi method converges even faster than standard rational Arnoldi, because the poles of the rational Krylov space are selected by taking into account the deflation of left-most eigenvalues.…”
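The Ritz-value "deflation" effect mentioned in this excerpt can be sketched with plain Lanczos on a hypothetical toy matrix (not the experiment from the cited work): a well-separated left-most eigenvalue is captured by a Ritz value within a few iterations, after which it no longer limits convergence.

```python
import numpy as np

def ritz_values(A, b, m):
    """Plain Lanczos with full reorthogonalization; the eigenvalues of the
    tridiagonal projection T are the Ritz values, i.e. the interpolation
    nodes of the (polynomial) Arnoldi approximation."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        # full reorthogonalization against all previous basis vectors
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.sort(np.linalg.eigvalsh(T))

# one outlying left-most eigenvalue plus a cluster in [-100, 0]
d = np.concatenate(([-1000.0], np.linspace(-100.0, 0.0, 199)))
A = np.diag(d)
b = np.ones(200)
rv = ritz_values(A, b, 10)
gap = abs(rv[0] - d[0])   # left-most Ritz value locks onto the outlier
```

Once a Ritz value sits on the outlying eigenvalue, the approximation behaves as if that eigenvalue had been removed from the spectrum, which is the mechanism behind the superlinear speedup described above.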
“…In Figure 1 (right) we show the convergence of Algorithm 1. Note that the convergence seems slightly faster than the linear rate of (10), particularly in later iterations, as superlinear convergence effects take place due to spectral adaption of the rational Arnoldi method [2]. The last example is more challenging: Consider a random diagonalizable matrix A ∈ C^{200×200} having eigenvalues in the unit disk under the constraint that the distance of each eigenvalue to Γ is at least 0.1.…”
Section: Numerical Experiments