2015
DOI: 10.1002/nla.2026

Randomized algorithms for generalized Hermitian eigenvalue problems with application to computing Karhunen–Loève expansion

Abstract: We describe randomized algorithms for computing the dominant eigenmodes of the Generalized Hermitian Eigenvalue Problem (GHEP) Ax = λBx, with A Hermitian and B Hermitian and positive definite. The algorithms only require the operations Ax, Bx and B^{-1}x and avoid forming square roots of B (i.e., operations of the form B^{1/2}x or B^{-1/2}x). We provide a convergence analysis and a posteriori error bounds that build upon the work of [13,16,18] (which have been derived for the case B = I). Add…
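As a rough illustration of the point about avoiding B^{1/2}, the sketch below builds a B-orthonormal basis for the dominant eigenspace of B^{-1}A using only user-supplied products Ax, Bx and B^{-1}x. The function and argument names (apply_A, apply_B, solve_B, the oversampling parameter p) are placeholders for illustration and are not taken from the paper.

```python
import numpy as np

def b_orthonormal_basis(apply_A, apply_B, solve_B, n, k, p=10, seed=0):
    """Randomized range finder for the pencil (A, B) -- illustrative sketch.

    Builds a B-orthonormal basis Q (Q^T B Q = I) for the dominant eigenspace
    of B^{-1} A using only the products A @ x, B @ x and B^{-1} @ x, so no
    square root of B is ever formed.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))                  # Gaussian test matrix
    # One pass over A and one B-solve per column: Y = B^{-1} A Omega
    Y = np.column_stack([solve_B(apply_A(w)) for w in Omega.T])
    # Modified Gram-Schmidt in the B-inner product <u, v>_B = u^T B v
    Q = np.empty_like(Y)
    for j in range(Y.shape[1]):
        q = Y[:, j]
        for i in range(j):
            q = q - Q[:, i] * (apply_B(Q[:, i]) @ q)
        q = q / np.sqrt(apply_B(q) @ q)
        Q[:, j] = q
    return Q
```

Here apply_A, apply_B and solve_B stand for user-supplied routines computing Ax, Bx and B^{-1}x (the last typically via a sparse factorization or an iterative solver); k is the number of wanted eigenpairs and p an oversampling parameter. This is a sketch of the general idea, not the paper's implementation.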

Cited by 59 publications (61 citation statements); references 21 publications.
“…It is important to note that while the mass matrix M is symmetric positive definite and may be sparse, T is symmetric positive semidefinite and dense. Since Q is dense, we applied a data-sparse technique to store it in the hierarchical matrix (H-matrix) format. Consequently, the computational cost of matrix-vector products involving Q is reduced from O(n^2) to O(n log n), with n the number of discretization points.…”
Section: Models and Methods
confidence: 99%
“…Since Q is dense, we applied a data-sparse technique to store it in the hierarchical matrix (H-matrix) format [66,67]. Consequently, the computational cost of matrix-vector products involving Q is reduced from O(n^2) to O(n log n), with n the number of discretization points. The H-matrix technique is a hierarchical division of a given matrix into rectangular blocks and a further approximation of these blocks by low-rank matrices.…”
Section: Computing Karhunen–Loève Approximation
confidence: 99%
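The quotes describe the H-matrix format only at a high level. The toy snippet below (a synthetic exponential kernel on [0, 1]; the grid size, correlation length and tolerance are made up for illustration) shows the underlying principle: an off-diagonal block of a smooth covariance kernel has low numerical rank, so storing and applying it through a truncated factorization costs O((m + n)r) instead of O(mn). Real H-matrix codes organize this over a hierarchical block tree and use cheaper compressors such as ACA; this is only the principle, not an H-matrix implementation.

```python
import numpy as np

n = 2000
x = np.linspace(0.0, 1.0, n)
ell = 0.2                                     # assumed correlation length

# Off-diagonal block: covariance between the two halves of the domain
block = np.exp(-np.abs(x[: n // 2, None] - x[None, n // 2:]) / ell)

# Compress the block with a truncated SVD
U, s, Vt = np.linalg.svd(block, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))              # numerical rank at tolerance 1e-8
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :]

# Apply the block through its low-rank factors and check the error
v = np.random.default_rng(0).standard_normal(n // 2)
err = np.linalg.norm(block @ v - Ur @ (sr * (Vr @ v))) / np.linalg.norm(block @ v)
print(f"numerical rank {r} of {n // 2}, matvec relative error {err:.1e}")
```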
“…After suitable discretization, e.g., by a finite element discretization, the generalized eigenvalue problem (3.18) results in an algebraic eigenproblem of the form Aψ = λBψ with A, B ∈ R^{n×n} and ψ ∈ R^n. Algorithm 2 summarizes the so-called double-pass randomized algorithm to solve the algebraic generalized eigenvalue problem (see [42,70] for details of the algorithms and [76] for its implementation).…”
Section: Computation of the Gradient and Hessian of the Control Objective
confidence: 99%
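The quote refers to a double-pass algorithm; the snippet below sketches what the second pass and the Rayleigh–Ritz step could look like once a B-orthonormal basis Q is available (e.g., from the range-finder sketch above). This is an illustrative reading of the approach, not the reference implementation cited as [76].

```python
import numpy as np
from scipy.linalg import eigh

def double_pass_ghep(apply_A, Q, k):
    """Rayleigh-Ritz step of a double-pass randomized GHEP solve (sketch).

    Q is assumed B-orthonormal (Q^T B Q = I), so the projected pencil
    (Q^T A Q, Q^T B Q) reduces to the small symmetric eigenproblem
    (Q^T A Q) v = lambda v.
    """
    AQ = np.column_stack([apply_A(q) for q in Q.T])   # second pass over A
    T = Q.T @ AQ                                      # small projected matrix
    T = 0.5 * (T + T.T)                               # symmetrize round-off
    lam, V = eigh(T)
    idx = np.argsort(lam)[::-1][:k]                   # k largest Ritz values
    return lam[idx], Q @ V[:, idx]                    # approximate eigenpairs of A psi = lam B psi
```

For example, lam, Psi = double_pass_ghep(apply_A, Q, k) with Q from the range-finder sketch; because Q is B-orthonormal, the columns of Psi are approximately B-orthonormal Ritz vectors.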
“…The mean and variance expressions of the (linear and quadratic) Taylor expansions involve the trace of the covariance-preconditioned Hessian of the control objective with respect to the uncertain parameter. Randomized algorithms for the solution of generalized eigenvalue problems [76,70,69] allow for accurate and efficient approximation of this trace and only require computing the action of the covariance-preconditioned Hessian on a number of random directions that depends on its numerical rank. As we showed in the numerical results, this approach is more efficient and accurate than the Gaussian trace estimator when the eigenvalues of the covariance-preconditioned Hessian exhibit fast decay, which is true when the control objective is only sensitive to a limited number of directions in the uncertain parameter space.…”
Section: 2mentioning
confidence: 99%
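A small synthetic experiment (the matrix, its size and the eigenvalue decay rate are made up for illustration, not taken from the cited works) showing the comparison the quote makes: when the spectrum decays quickly, summing the leading Ritz values from a randomized projection approximates the trace much better than a Gaussian (Hutchinson-type) estimator with a comparable number of matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 500, 20
# Synthetic symmetric matrix with fast eigenvalue decay, standing in for the
# covariance-preconditioned Hessian (purely illustrative)
lam = 10.0 ** (-0.4 * np.arange(n))
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = (U * lam) @ U.T

true_trace = np.trace(H)

# (a) Randomized low-rank estimate: sum of the r leading Ritz values (2r matvecs)
Omega = rng.standard_normal((n, r))
Q, _ = np.linalg.qr(H @ Omega)
ritz = np.linalg.eigvalsh(Q.T @ H @ Q)
lowrank_trace = ritz.sum()

# (b) Hutchinson (Gaussian) estimator with the same budget of 2r matvecs
Z = rng.standard_normal((n, 2 * r))
hutch_trace = np.mean(np.sum(Z * (H @ Z), axis=0))

print(f"true {true_trace:.4f}  low-rank {lowrank_trace:.4f}  Hutchinson {hutch_trace:.4f}")
```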
“…Such a ridge approximation is obtained by minimizing an upper bound on the Kullback–Leibler distance between the posterior distribution and its approximation. In this paper, we propose dimension reduction based directly on a partial (generalized) spectral decomposition of the prior covariance or the covariance of a local Gaussian approximation to the posterior (GAP). The intrinsic low-dimensional subspace is identified by the r leading eigenfunctions, which can be efficiently obtained by randomized linear-algebra algorithms [15–17]. Unlike [8], the posterior covariance projected onto the subspace is not empirically updated, but rather approximated in a diagonal form that can still capture most of the variation of the projected posterior.…”
confidence: 99%
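A minimal sketch of the dimension-reduction step described in the quote, in a plain discrete setting: the r leading eigenpairs of a symmetric positive semidefinite covariance operator are computed from matrix-vector products alone, and a parameter vector is projected onto the resulting subspace. The helper names and the use of SciPy's Lanczos routine eigsh are illustrative assumptions; the cited works use randomized solvers and operator-weighted inner products.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def leading_subspace(apply_C, n, r):
    """Return the r leading eigenpairs of a symmetric PSD covariance operator.

    apply_C(x) should return C @ x; only matrix-vector products are needed.
    Illustrative sketch, not the implementation used in the cited works.
    """
    C = LinearOperator((n, n), matvec=apply_C, dtype=float)
    vals, vecs = eigsh(C, k=r, which="LM")   # Lanczos; a randomized solver also works
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

def project(m, Vr):
    """Project a parameter vector onto the low-dimensional subspace:
    m_r = V_r V_r^T m (assuming a Euclidean inner product)."""
    return Vr @ (Vr.T @ m)
```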