2014
DOI: 10.1137/140973463

Convergence of Restarted Krylov Subspace Methods for Stieltjes Functions of Matrices

Abstract: To approximate f(A)b, the action of a matrix function on a vector, by a Krylov subspace method, restarts may become mandatory due to storage requirements for the Arnoldi basis or due to the growing computational complexity of evaluating f on a Hessenberg matrix of growing size. A number of restarting methods have been proposed in the literature in recent years and there has been substantial algorithmic advancement concerning their stability and computational efficiency. However, the question under whi…
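To make the setting concrete, here is a minimal sketch, assuming NumPy/SciPy, of the basic Arnoldi projection f_m = ||b|| V_m f(H_m) e_1 that restarted methods refine cycle by cycle. This is not the restart algorithm analyzed in the paper; the function name arnoldi_fAb, the restart-free structure, and the diagonal test matrix are illustrative choices.

```python
import numpy as np
from scipy.linalg import funm

def arnoldi_fAb(A, b, m, f):
    """Basic (unrestarted) Arnoldi approximation of f(A) b.

    Builds an orthonormal basis V_m of the Krylov subspace K_m(A, b)
    and returns ||b|| * V_m f(H_m) e_1, the projection approximation
    that restarted methods refine cycle by cycle.
    """
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: Krylov space is exhausted
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (funm(Hm, f) @ e1)

# Example: approximate A^{-1/2} b (a Stieltjes function) for a small SPD matrix.
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 100.0, 200))
b = rng.standard_normal(200)
approx = arnoldi_fAb(A, b, 30, lambda x: x ** -0.5)
exact = np.diag(A) ** -0.5 * b
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Restarted schemes such as the one studied in the paper keep the cycle length fixed and correct the approximation cycle by cycle instead of storing the full basis.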

Cited by 45 publications (72 citation statements). References 25 publications.
“…We showed that the relaxed strategy alleviates this problem whenever accurate approximations are required. However, for particular selections of matrices and functions, the approximation of f(A)v can still be very expensive, and some other strategies could be exploited, such as restarting; see, e.g., [10], [11], [17] and references therein. Finally, our approach could be used to estimate the norm of other matrix objects, such as the geometric mean [3], or the derivatives of matrix functions, such as the Fréchet derivative of the matrix exponential or of other functions [18].…”
Section: Final Considerations
confidence: 99%
“…Therefore, a matrix-vector multiplication D_N χ is obtained via an additional “sign function iteration” which approximates sign(Γ_5 D_W)χ as part of the computation of D_N χ. For this sign function iteration we use the restarted Krylov subspace method proposed recently in [15,16], which allows for thick restarts of the Arnoldi process and has proven to be among the most efficient methods to approximate sign(Γ_5 D_W)χ. The sign function iteration then still represents by far the most expensive part of the overall computation.…”
Section: Quality and Cost of the Preconditioner
confidence: 99%
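For orientation, the standard identity that connects the sign-function computation above to Stieltjes functions of matrices (stated here for reference, not quoted from [15,16]) is

$$\operatorname{sign}(A) \;=\; A\,(A^2)^{-1/2}, \qquad (A^2)^{-1/2} \;=\; \frac{2}{\pi}\int_0^\infty \bigl(A^2 + t^2 I\bigr)^{-1}\,\mathrm{d}t,$$

so applying sign(Γ_5 D_W) to a vector amounts to evaluating a Stieltjes function of (Γ_5 D_W)^2, which is where restarted Krylov methods for Stieltjes functions enter.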
“…Shifted linear systems, here referred to as multi-mass shifted systems, have to be solved if the square root of a matrix is evaluated via rational or polynomial approximations (see, e.g., [1,2]) or via an integral definition as a Stieltjes function [3,4]. In general, matrix roots have to be calculated in Monte Carlo simulations involving a single quark [2].…”
Section: Introduction
confidence: 99%
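As a small worked illustration of why shifted systems appear (drawn from the standard Stieltjes representation, not quoted from [3,4]): discretizing the integral representation of the inverse square root with an N-point quadrature rule turns its action on a vector into a sum of shifted solves with a common right-hand side,

$$A^{-1/2} b \;=\; \frac{2}{\pi}\int_0^\infty \bigl(A + t^2 I\bigr)^{-1} b \,\mathrm{d}t \;\approx\; \sum_{k=1}^{N} \omega_k \bigl(A + t_k^2 I\bigr)^{-1} b,$$

with quadrature weights ω_k and nodes t_k. Each term is a shifted linear system with the same right-hand side, and A^{1/2} b = A (A^{-1/2} b), which is why multi-shift Krylov solvers are the natural tool.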
“…The approaches routinely used to solve shifted linear equations like Eq. (1.1) are Krylov space solvers [3,4,6-8]. In what follows we will refer to the multi-mass shift conjugate gradient (MMS-CG) solver as the “standard” solver [6,7].…”
Section: Introduction
confidence: 99%
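A minimal sketch of the shift invariance K_m(A, b) = K_m(A + sI, b) that all such multi-shift Krylov solvers exploit: one Lanczos basis is built once and every shift only requires a small projected solve. This is a one-cycle, FOM-type illustration assuming a symmetric positive definite A and NumPy, not the MMS-CG algorithm of [6,7]; the name lanczos_shifted_solve and the test problem are hypothetical.

```python
import numpy as np

def lanczos_shifted_solve(A, b, shifts, m):
    """One Krylov cycle for a family of shifted SPD systems (A + s*I) x = b.

    A single Lanczos basis V_m and tridiagonal T_m are built once; each shift
    then only needs the small projected solve (T_m + s*I) y = ||b|| e_1.
    """
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    v_prev = np.zeros(n)
    for j in range(m):
        # three-term Lanczos recurrence
        w = A @ V[:, j] - (beta[j - 1] * v_prev if j > 0 else 0.0)
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            v_prev = V[:, j]
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    e1 = np.zeros(m); e1[0] = beta0
    return {s: V @ np.linalg.solve(T + s * np.eye(m), e1) for s in shifts}

# Example: approximate (A + s*I)^{-1} b for several shifts s from one basis.
A = np.diag(np.linspace(1.0, 50.0, 100))
b = np.ones(100)
sols = lanczos_shifted_solve(A, b, shifts=[0.1, 1.0, 10.0], m=40)
for s, x in sols.items():
    print(s, np.linalg.norm((A + s * np.eye(100)) @ x - b))
```

The MMS-CG solver referenced above uses the same shift invariance inside the CG recurrences, so all shifted solutions are advanced with the single matrix-vector product per iteration performed for the seed system.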