2017
DOI: 10.1007/s00211-017-0880-z

Randomized matrix-free trace and log-determinant estimators

Abstract: We present randomized algorithms for estimating the trace and determinant of Hermitian positive semi-definite matrices. The algorithms are based on subspace iteration, and access the matrix only through matrix-vector products. We analyse the error due to randomization, for starting guesses whose elements are Gaussian or Rademacher random variables. The analysis is cleanly separated into a structural (deterministic) part followed by a probabilistic part. Our absolute bounds for the expectation and concentration…

Cited by 57 publications (93 citation statements)
References 49 publications (79 reference statements)
“…As shown in Ref. 70, the trace error resulting from the subspace iteration method approaches the sum of neglected small eigenvalues exponentially fast with the number of iterations, which makes valid our error bound of Eq. 24.…”
Section: Effective Basis for the RPA Energy (supporting)
confidence: 73%
“…There are multiple advantages in using Algorithm 2 to approximate the trace, see e.g., [42,69]. In terms of accuracy, the approximation error is bounded by the sum of the remaining eigenvalues, so that the error is small if the eigenvalues decay fast or if the Hessian H has low rank, see [42,69] for more details. In terms of computational efficiency, the 2(k + p) Hessian matrix-vector products, which entail the solution of a pair of linearized state/adjoint equations (as shown in Sec.…”
Section: Computation of the Gradient and Hessian of the Control Objective (mentioning)
confidence: 99%
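As a concrete illustration of the subspace-iteration trace approximation discussed in the citing works above, here is a minimal sketch of a matrix-free estimator. This is our own simplified rendering, not the paper's exact Algorithm 2; the function name `trace_subspace_iteration` and the parameter choices (`p=10` oversampling, `q=2` iterations) are illustrative:

```python
import numpy as np

def trace_subspace_iteration(matvec, n, k, p=10, q=2, seed=None):
    """Estimate trace(A) for a symmetric PSD matrix A that is accessed
    only through matrix-vector products (matvec).

    Draws a Gaussian starting block with k + p columns, runs q steps of
    subspace iteration, and returns trace(Q^T A Q), where Q is an
    orthonormal basis for the iterated subspace.
    """
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k + p)))
    for _ in range(q):
        Q, _ = np.linalg.qr(matvec(Q))  # subspace-iteration step
    return np.trace(Q.T @ matvec(Q))    # trace of the projected matrix

# Example: PSD matrix whose eigenvalues decay sharply after rank k.
rng = np.random.default_rng(0)
n, k = 200, 20
eigvals = np.concatenate([np.linspace(10.0, 1.0, k),
                          1e-6 * np.ones(n - k)])
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * eigvals) @ U.T

estimate = trace_subspace_iteration(lambda X: A @ X, n, k, seed=1)
# The error is bounded (up to a modest constant) by the sum of the
# neglected eigenvalues, which is tiny here because of the sharp decay.
error = abs(estimate - np.trace(A))
assert error <= 2.0 * eigvals[k:].sum()
```

This matches the qualitative claim quoted above: the cost is a handful of products of `A` with blocks of `k + p` vectors, and the error tracks the tail of the spectrum rather than the full trace.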
“…The mean and variance expressions of the (linear and quadratic) Taylor expansions involve the trace of the covariance-preconditioned Hessian of the control objective with respect to the uncertain parameter. Randomized algorithms for solution of generalized eigenvalue problems [76,70,69] allow for accurate and efficient approximation of this trace and only require computing the action of the covariancepreconditioned Hessian on a number of random directions that depends on its numerical rank. As we showed in the numerical results, this approach is more efficient and accurate than the Gaussian trace estimator when the eigenvalues of the covariancepreconditioned Hessian exhibit fast decay, which is true when the control objective is only sensitive to a limited number of directions in the uncertain parameter space.…”
Section: 2 (mentioning)
confidence: 99%
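For contrast, the "Gaussian trace estimator" referred to above is the classical Hutchinson-type Monte Carlo estimator, sketched below (a generic illustration; the function name is ours). It is unbiased, but its error shrinks only like 1/sqrt(num_samples), which is why the subspace-based approach is preferable when the eigenvalues decay fast:

```python
import numpy as np

def trace_gaussian(matvec, n, num_samples, seed=None):
    """Hutchinson-style Monte Carlo estimator: trace(A) equals the
    expectation of z^T A z for standard Gaussian probes z, so we
    average that quadratic form over many random probes."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, num_samples))
    return np.mean(np.sum(Z * matvec(Z), axis=0))

# Example: geometric eigenvalue decay, so trace(A) is close to 2.
rng = np.random.default_rng(0)
n = 200
eigvals = 2.0 ** -np.arange(n)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * eigvals) @ U.T

estimate = trace_gaussian(lambda X: A @ X, n, num_samples=2000, seed=1)
# Unbiased, but thousands of probes are needed for ~1% accuracy here,
# whereas a small iterated subspace captures this trace almost exactly.
assert abs(estimate - np.trace(A)) < 0.2
```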
“…In [44], the authors propose an alternate design criterion given by a lower bound of the expected information gain, and use PC representation of the forward model to accelerate objective function evaluations. However, the approaches based on PC representations remain limited in scope to problems with low to moderate parameter dimensions (e.g., parameter dimensions on the order of tens). Efficient estimators for the evaluation of the D-optimal criterion were developed in [38,39]; however, these works do not discuss the problem of computing D-optimal designs. The mathematical formulation of Bayesian D-optimality for infinite-dimensional Bayesian linear inverse problems was established in [2].…”
(mentioning)
confidence: 99%