Hessian-based adaptive sparse quadrature for infinite-dimensional Bayesian inverse problems
2017 | DOI: 10.1016/j.cma.2017.08.016
Cited by 51 publications (45 citation statements) | References 21 publications
“…the resulting posterior measure. For example, the computational burden of expensive forward or likelihood models can be reduced by surrogates or multilevel strategies [14,20,27,34], and for many classical sampling or integration methods, such as Quasi-Monte Carlo [12], Markov chain Monte Carlo [6,32,42], and numerical quadrature [5,35], we now know modifications and conditions which ensure dimension-independent efficiency.…”
Section: Introduction
Confidence: 99%
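As a concrete illustration of one such dimension-robust modification, below is a minimal sketch of a preconditioned Crank–Nicolson (pCN) MCMC step for a Gaussian prior, in the spirit of the dimension-independent samplers cited in the excerpt. The toy likelihood, dimension, and step size are assumptions made for illustration only, not taken from any of the cited works.

```python
# Sketch of the preconditioned Crank-Nicolson (pCN) proposal. Because the
# proposal is reversible with respect to the Gaussian prior N(0, I), the
# acceptance ratio involves only the likelihood, and the acceptance rate
# does not degenerate as the discretization dimension n grows.
import numpy as np

rng = np.random.default_rng(2)
n, beta, n_steps = 1000, 0.2, 5000   # dimension, step size, chain length (assumed)

def log_likelihood(u):
    # Placeholder misfit; in a Bayesian inverse problem this would involve
    # the forward model G(u) and the observed data.
    return -0.5 * np.sum((u[:5] - 1.0) ** 2)

u = rng.standard_normal(n)           # start from a draw of the N(0, I) prior
accepted = 0
for _ in range(n_steps):
    # pCN proposal: v = sqrt(1 - beta^2) * u + beta * xi,  xi ~ N(0, I)
    v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(n)
    # Acceptance probability uses only the likelihood ratio, not the prior.
    if np.log(rng.uniform()) < log_likelihood(v) - log_likelihood(u):
        u, accepted = v, accepted + 1
print("acceptance rate:", accepted / n_steps)
```

The design point is that the pCN proposal leaves the prior invariant, which is what keeps the acceptance rate bounded away from zero as n is refined; a standard random-walk proposal would require shrinking its step size with n.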
“…In the case of the current version of the BayesFactor package, such an option is an alternative algorithm using the Laplace approximation. Because this optional algorithm does not use any sampling, it will provide stable results across runs (Schillings, Sprungk, & Wacker, 2020; for alternative algorithms with stable, analytic results, see Chen, Villa, & Ghattas, 2017; Schillings & Schwab, 2013). At the same time, the Laplace approximation can return systematically different results than (many iterations of) MCMC-based methods, especially when sample sizes are small.…”
Section: Table
Confidence: 99%
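To make the sampling-free property concrete, here is a minimal sketch of a Laplace approximation to a log marginal likelihood for a one-parameter toy model. It is not the BayesFactor package's implementation; the model, data, and all names are assumptions. The point it illustrates is that the computation is fully deterministic, so repeated runs return identical results, unlike an MCMC-based estimate.

```python
# Laplace approximation of the (unnormalized) log evidence for a toy model
# y_i ~ N(theta, 1) with a standard normal prior on theta. Everything is
# deterministic: a MAP optimization plus a Hessian evaluation, no sampling.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=0.3, scale=1.0, size=20)   # assumed toy data

def neg_log_post(theta):
    # Gaussian likelihood and prior with constant terms dropped, so the
    # "evidence" below is correct only up to an additive constant.
    log_lik = -0.5 * np.sum((y - theta[0]) ** 2)
    log_prior = -0.5 * theta[0] ** 2
    return -(log_lik + log_prior)

# 1) MAP point by deterministic optimization
res = minimize(neg_log_post, x0=np.zeros(1))
theta_map = res.x

# 2) Hessian of the negative log posterior at the MAP (d = 1, finite differences)
eps = 1e-5
h = (neg_log_post(theta_map + eps) - 2.0 * neg_log_post(theta_map)
     + neg_log_post(theta_map - eps)) / eps**2

# 3) Laplace approximation:
#    log p(y) ~ -neg_log_post(MAP) + (d/2) log(2*pi) - (1/2) log det(H)
d = 1
log_evidence = -res.fun + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * np.log(h)
print(log_evidence)   # identical on every run, unlike an MCMC estimate
```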
“…Improved bounds are demonstrated in [66] and show that the Gaussian estimator becomes very inefficient if the stable rank of the operator is small, and that it allows for small N only if the eigenvalues are all of approximately the same size. This means that the randomized Gaussian estimator is not a viable solution for estimating the trace of H, since it has been observed numerically or proven analytically that for many problems the Hessian operator is either nearly low-rank or its eigenvalues exhibit fast decay [8,36,19,21,20,18,23,30,2,3,1,33,64,46,58,22,24].…”
Section: Computation of the Gradient and Hessian of the Control Objective
Confidence: 99%
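The following sketch illustrates the quoted point with a Hutchinson-type randomized Gaussian trace estimator. The diagonal test matrices and their spectra are assumptions chosen for illustration, not taken from [66] or the other cited works: for a flat spectrum the estimator's relative error is small even with modest N, while for a fast-decaying spectrum (small stable rank) it is markedly larger at the same N.

```python
# Hutchinson-type Gaussian trace estimator: tr(A) ~ (1/N) sum_j z_j^T A z_j
# with probes z_j ~ N(0, I). Its relative standard deviation scales like
# sqrt(2/N) * ||A||_F / tr(A), which is small for a flat spectrum but large
# when the eigenvalues decay fast (small stable rank).
import numpy as np

rng = np.random.default_rng(1)
n, N = 500, 20   # operator dimension and number of probe vectors (assumed)

def hutchinson_trace(A, N):
    """One realization of the Gaussian trace estimator with N probes."""
    Z = rng.standard_normal((n, N))
    # Column-wise quadratic forms z_j^T (A z_j), averaged over the probes.
    return np.mean(np.einsum('ij,ij->j', Z, A @ Z))

for name, eigs in [("flat spectrum", np.ones(n)),
                   ("fast decay", 1.0 / (1.0 + np.arange(n)) ** 2)]:
    A = np.diag(eigs)                    # diagonal stand-in for a Hessian
    exact = eigs.sum()
    estimates = [hutchinson_trace(A, N) for _ in range(200)]
    print(f"{name}: relative std of estimator = {np.std(estimates) / exact:.3f}")
```

In the fast-decay case the relative error is roughly an order of magnitude worse, which is exactly the regime of the nearly low-rank Hessians discussed above; there, a low-rank spectral approximation of H is the more efficient route to its trace.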