2015
DOI: 10.1142/9789814675772_0001
Matrix Functions: A Short Course

Cited by 9 publications (9 citation statements)
References 47 publications (24 reference statements)
“…In the case that the inverse matrix square root of the covariance is unknown, we cannot use a Cholesky factorization to perform this update for large-scale problems. Instead, we employ a technique from numerical linear algebra: a combination of rational approximations and Krylov subspace methods (Aune et al.; Kennedy; Higham and Lin; Jegerlehner). These methods, outlined in the next section, compute the product of the inverse square root of a matrix with arbitrary vectors, requiring only matrix-vector products involving the matrix itself.…”
Section: Methods (mentioning, confidence: 99%)
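The approach this statement describes can be sketched compactly. The following is a minimal illustration, not the cited papers' exact scheme: it approximates $A^{-1/2}b$ for a symmetric positive definite $A$ by Gauss–Legendre quadrature of the integral identity $A^{-1/2} = (2/\pi)\int_0^\infty (A + t^2 I)^{-1}\,dt$, solving each shifted system with plain conjugate gradients so that only matrix-vector products with $A$ are needed. All function names are illustrative.

```python
# Sketch (assumed scheme, not the cited papers' exact method): approximate
# A^{-1/2} b for SPD A via the identity
#   A^{-1/2} = (2/pi) * integral_0^inf (A + t^2 I)^{-1} dt,
# discretized by Gauss-Legendre quadrature after the substitution t = tan(theta).
# Each shifted solve uses plain conjugate gradients, so only matrix-vector
# products with A are required.
import numpy as np

def cg(matvec, b, tol=1e-12, maxiter=1000):
    """Minimal conjugate gradient for SPD systems; needs only a matvec."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inv_sqrt_times_vec(A, b, nodes=32):
    """Approximate A^{-1/2} b via quadrature plus shifted CG solves."""
    theta, w = np.polynomial.legendre.leggauss(nodes)
    theta = (theta + 1) * (np.pi / 4)   # map nodes from (-1, 1) to (0, pi/2)
    w = w * (np.pi / 4)
    x = np.zeros_like(b)
    for t, wt in zip(np.tan(theta), w):
        shifted = lambda v: A @ v + (t * t) * v   # (A + t^2 I) v
        x += wt * (1 + t * t) * cg(shifted, b)    # sec^2 = 1 + tan^2
    return (2 / np.pi) * x
```

Because the integrand is analytic on the mapped interval, the quadrature converges rapidly for matrices with a modest spectral range, and each shifted solve touches $A$ only through matrix-vector products.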
See 1 more Smart Citation
“…In the case that the inverse‐matrix‐square‐root of covariance is unknown, we can not use a Cholesky factorization to perform this update for large‐scale problems. Instead, we employ a technique from numerical linear algebra, a combination of rational approximations and Krylov sub‐space methods (Aune et al ; Kennedy, ; Higham and Lin, ; Jegerlehner, ). These methods, outlined in the next section, are able to compute the product of the square‐root‐inverse of a matrix with arbitrary vectors – only requiring matrix‐vector products involving the matrix itself.…”
Section: Methodsmentioning
confidence: 99%
“…Krylov methods, such as the conjugate gradient linear solver and its multi-shift variants (Simpson; Jegerlehner), can solve linear systems using only matrix-vector products. Rational approximations reduce the computation of matrix functions, such as the determinant, inverse, or square root, to solving a family of linear equations, with error guarantees that are straightforward to control and can even be set to floating-point precision (Higham and Lin; Kennedy). A combination of these methods has been successfully applied to sample from high-dimensional Gaussian distributions (Aune et al.) and to perform maximum likelihood inference by estimating the log-determinant term (Aune et al.).…”
Section: Introduction (mentioning, confidence: 99%)
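The log-determinant estimation this statement mentions rests on the identity $\log\det A = \operatorname{tr}(\log A)$ for SPD $A$, with the trace estimated stochastically from quadratic forms $z^{\top}(\log A)z$. A minimal sketch, under the simplifying assumption that $\log(A)$ is formed explicitly (at scale one would instead apply $\log(A)$ to vectors via the matrix-free rational/Krylov machinery the statement describes); the function name and probe count are illustrative.

```python
# Sketch: log det A = tr(log A) for SPD A, estimated via Hutchinson's
# stochastic trace estimator with Rademacher probes. scipy.linalg.logm is
# a dense stand-in for a matrix-free "apply log(A) to a vector" routine.
import numpy as np
from scipy.linalg import logm

def logdet_hutchinson(A, num_probes=200, rng=None):
    """Unbiased stochastic estimate of log det A for SPD A."""
    rng = np.random.default_rng(rng)
    L = logm(A)                  # dense for illustration only
    n = A.shape[0]
    est = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        est += z @ (L @ z)                    # E[z^T L z] = tr(L)
    return est / num_probes
```

For diagonal $A$ every Rademacher probe returns the trace exactly; for general SPD matrices the estimator's variance is governed by the off-diagonal Frobenius mass of $\log A$, so more probes are needed.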
“…When the ODE is affine, $\dot{x}(t) = Lx(t) + r$, and the initial condition is $x(0) = 0$, its analytic solution is given by $x(t) = t\,\varphi_1(tL)\,r$, where $\varphi_1(tL) = \sum_{j=0}^{\infty} (tL)^j/(j+1)!$ is the shifted Taylor expansion of the exponential ([11], p. 10). In case the initial condition is not centred at the origin, it is possible to translate the coordinate frame by $y(t) := x(t) - x(0)$, so that $y(0) = 0$ and $x(t) = y(t) + x(0)$, with $y$ satisfying the affine ODE $\dot{y}(t) = Ly(t) + (r + Lx(0))$.…”
Section: Numerical Computations Of The Exponential Of A Stationary Ve… (mentioning, confidence: 99%)
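The closed form quoted above is easy to evaluate numerically. A minimal sketch: for invertible $L$, $t\,\varphi_1(tL)r = L^{-1}(e^{tL} - I)r$, and a standard matrix-free trick reads $t\,\varphi_1(tL)r$ off the exponential of a small augmented matrix, avoiding any inverse. The function name is illustrative.

```python
# Sketch of x(t) = t * phi_1(t L) r, the solution of the affine ODE
# x' = L x + r with x(0) = 0, where phi_1(z) = (e^z - 1)/z.
# Standard augmented-matrix identity:
#   expm([[B, c], [0, 0]]) = [[e^B, phi_1(B) c], [0, 1]],
# so with B = tL and c = t*r the top-right block is t * phi_1(tL) r.
import numpy as np
from scipy.linalg import expm

def affine_ode_solution(L, r, t):
    """Return x(t) = t * phi_1(t L) r via one augmented matrix exponential."""
    n = L.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = t * L
    M[:n, n] = t * r
    return expm(M)[:n, n]     # top block of the last column
```

This agrees with the direct formula $L^{-1}(e^{tL} - I)r$ whenever $L$ is invertible, but also works for singular $L$.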
“…The norm of the Fréchet derivative of a matrix function $f : A \subset \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$ appears explicitly in an expression giving the relative condition number of $f$ at $X$. Precisely (see Section 3.1 of Higham [4], or Section 3.3 of Higham and Lijing [6]): $\operatorname{cond}(f, X) = \|L_f(X)\|\,\|X\| / \|f(X)\|$. This number measures the sensitivity of $f(X)$ to small changes in $X$. There are recent works devoted to estimating bounds for this condition number in particular situations; see, for example, Cardoso and Sadeghi [1], Deadman and Relton [2], or Kandolf and Relton [8], among others.…”
Section: Introduction (unclassified)
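For the matrix exponential, SciPy exposes the Fréchet derivative directly, which permits a simple sampling-based lower bound on this condition number. A minimal sketch, assuming the Frobenius norm and random probing directions (the function name and sample count are illustrative, and a power-iteration estimator would be sharper):

```python
# Sketch: crude lower bound on the relative condition number of the matrix
# exponential, cond(exp, X) = ||L_exp(X)|| ||X|| / ||exp(X)||, by sampling
# random directions E and keeping the largest ratio ||L(X, E)|| / ||E||.
import numpy as np
from scipy.linalg import expm_frechet

def cond_expm_estimate(X, samples=20, rng=None):
    """Sampled lower bound on the relative condition number of expm at X."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    best = 0.0
    fX = None
    for _ in range(samples):
        E = rng.standard_normal((n, n))
        fX, LXE = expm_frechet(X, E)      # exp(X) and Frechet derivative L(X, E)
        best = max(best, np.linalg.norm(LXE, 'fro') / np.linalg.norm(E, 'fro'))
    return best * np.linalg.norm(X, 'fro') / np.linalg.norm(fX, 'fro')
```

For $X = cI$ the Fréchet derivative is $L(X, E) = e^{c}E$, so the estimate collapses to $\|X\|_F/\|I\|_F = |c|$ regardless of the sampled directions, which gives a convenient sanity check.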