2016
DOI: 10.1016/j.jcp.2015.12.032
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems

Abstract: The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation-based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively…

Cited by 48 publications (54 citation statements)
References 69 publications
“…Riemann manifold Langevin and Hamiltonian Monte Carlo methods [60,61] exploit the first and second order local structure of the posterior distribution and profit from more efficient gradient evaluation. The same holds for novel emulator-based sampling procedures [62] and approaches for posterior approximation [63]. By exploiting the proposed approach, rigorous Bayesian parameter estimation for models with hundreds of parameters could become a standard tool instead of an exception [64,65].…”
Section: Discussion (mentioning)
confidence: 85%
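The geometric proposals described in this excerpt can be illustrated compactly. Below is a minimal sketch of a preconditioned (simplified-manifold) Metropolis-adjusted Langevin step, assuming a fixed metric whose inverse G_inv is supplied by the caller; all names are illustrative, and this is a sketch of the general idea, not the cited papers' implementation.

```python
import numpy as np

def preconditioned_mala(log_post, grad_log_post, G_inv, x0, eps, n_iter, rng):
    """Preconditioned MALA: Langevin proposals scaled by a fixed inverse
    metric G_inv (a simplified stand-in for position-dependent
    Riemann manifold methods)."""
    L = np.linalg.cholesky(G_inv)          # G_inv = L @ L.T, used for sampling
    G = np.linalg.inv(G_inv)               # used in the proposal log-densities

    def logq(y, mean):
        # log N(y; mean, eps^2 * G_inv), up to an additive constant
        d = y - mean
        return -0.5 * (d @ G @ d) / eps**2

    x = np.asarray(x0, float)
    lp = log_post(x)
    samples = []
    for _ in range(n_iter):
        mean_fwd = x + 0.5 * eps**2 * (G_inv @ grad_log_post(x))
        prop = mean_fwd + eps * (L @ rng.standard_normal(x.size))
        mean_rev = prop + 0.5 * eps**2 * (G_inv @ grad_log_post(prop))
        lp_prop = log_post(prop)
        log_alpha = (lp_prop + logq(x, mean_rev)) - (lp + logq(prop, mean_fwd))
        if np.log(rng.uniform()) < log_alpha:   # Metropolis-Hastings correction
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: standard Gaussian target in 2D.
rng = np.random.default_rng(0)
chain = preconditioned_mala(lambda x: -0.5 * x @ x, lambda x: -x,
                            np.eye(2), np.zeros(2), eps=0.8, n_iter=1000, rng=rng)
```

In the full Riemann manifold methods the metric varies with position, which adds metric-derivative terms to the proposal; the fixed-metric version above is the simplified variant often used when those terms are too expensive to evaluate.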
“…Substituting K_n with the DR-∞-GMC Markov kernel into the SMC scheme can parallelize these MCMC algorithms to achieve further efficiency improvement and tackle Bayesian inverse problems at larger scale. DR-∞-mHMC can be improved further by, e.g., surrogate methods [42,43] or grid methods [44] to reduce the burden of point-wise updating of the gradient and metric. Within the leap-frog steps of 'HMC'-type algorithms, one can consider 'BFGS'-type updates as in quasi-Newton methods [45].…”
Section: Conclusion and Discussion (mentioning)
confidence: 99%
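The "leap-frog steps" mentioned in this excerpt are the standard symplectic integrator inside HMC-type samplers. A minimal sketch with a fixed inverse mass matrix M_inv (all names illustrative):

```python
import numpy as np

def leapfrog(grad_log_post, x, p, eps, n_steps, M_inv):
    """Leapfrog integration of Hamiltonian dynamics for
    H(x, p) = -log_post(x) + 0.5 * p @ M_inv @ p."""
    x, p = np.array(x, float), np.array(p, float)
    p += 0.5 * eps * grad_log_post(x)      # initial momentum half-step
    for _ in range(n_steps - 1):
        x += eps * (M_inv @ p)             # full position step
        p += eps * grad_log_post(x)        # full momentum step
    x += eps * (M_inv @ p)                 # final position step
    p += 0.5 * eps * grad_log_post(x)      # final momentum half-step
    return x, p
```

The quasi-Newton ('BFGS'-type) idea in the excerpt would replace the fixed M_inv with a curvature approximation updated from gradient differences along the trajectory, avoiding repeated exact metric evaluations.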
“…The second approach relies on a surrogate function, the gradient of which is less expensive to calculate [11]. [7] used Gaussian processes (GP) to produce satisfactory results in lower dimensions. However, training a GP is itself computationally expensive, and training points must be chosen with great care.…”
Section: Introduction (mentioning)
confidence: 99%
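For the GP-surrogate idea in this excerpt, the emulator's predictive mean has an analytic gradient under an RBF kernel, so the expensive model gradient can be replaced by a cheap closed form. A minimal sketch using scikit-learn, with a toy stand-in for the expensive log-posterior; the design-point strategy and all names are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

ell = 1.0
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 2))      # design points (their choice matters, as the excerpt notes)
y = np.array([-0.5 * x @ x for x in X])       # toy stand-in for expensive log-posterior evaluations

# Fix the kernel (optimizer=None) so the fitted alpha_ matches the RBF used below.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=ell), optimizer=None).fit(X, y)

def surrogate_grad(x):
    """Gradient of the GP predictive mean m(x) = k(x, X) @ alpha,
    using d/dx k_RBF(x, x_i) = k_RBF(x, x_i) * (x_i - x) / ell^2."""
    diff = X - x
    k = np.exp(-0.5 * np.sum(diff**2, axis=1) / ell**2)
    return (k * gp.alpha_.ravel()) @ diff / ell**2

print(surrogate_grad(np.array([0.5, -0.5])))  # approximately the true gradient (-0.5, 0.5)
```

Once trained, surrogate_grad can stand in for grad_log_post inside a Langevin or HMC proposal, reserving exact model evaluations for the accept/reject step.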