2020
DOI: 10.1190/geo2019-0517.1
Hamiltonian Monte Carlo algorithms for target- and interval-oriented amplitude versus angle inversions

Abstract: A reliable assessment of the posterior uncertainties is a crucial aspect of any amplitude versus angle (AVA) inversion due to the severe ill-conditioning of this inverse problem. To accomplish this task, numerical Markov chain Monte Carlo algorithms are usually used when the forward operator is nonlinear. The downside of these algorithms is the considerable number of samples needed to attain stable posterior estimations, especially in high-dimensional spaces. To overcome this issue, we assessed the suitability …

Cited by 17 publications (18 citation statements); references 48 publications.
“…Another viable strategy to reduce the burn‐in period could be starting the MCMC sampling from the model predicted by a local inversion. More advanced MCMC algorithms that incorporate the principles of Hamiltonian dynamics into the standard Metropolis–Hastings method (Betancourt, 2017; Fichtner et al., 2019; Gebraad et al., 2020; Aleardi et al., 2020; Aleardi and Salusti, 2020b) could be useful to speed up the probabilistic ERT inversion. The major computational requirement of the Hamiltonian Monte Carlo algorithm is the need for computing the derivative (i.e.…”
Section: Discussion
confidence: 99%
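To make the mechanics behind this statement concrete, here is a minimal Python sketch of a single HMC iteration, in which leapfrog integration of Hamiltonian dynamics replaces the random-walk proposal of standard Metropolis–Hastings. The function names, step size and trajectory length are illustrative assumptions, not taken from the cited papers; the repeated calls to `grad` show why derivative evaluation is the dominant cost the statement refers to.

```python
import numpy as np

def hmc_step(m, neg_log_post, grad, eps=0.01, n_leapfrog=20, rng=None):
    """One HMC iteration: leapfrog integration of Hamiltonian dynamics
    followed by a Metropolis accept/reject test on the total energy.
    neg_log_post(m) is the misfit U(m) = -log p(m | d); grad(m) is its
    gradient, whose repeated evaluation dominates the cost."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(m.size)            # auxiliary momenta ~ N(0, I)
    m_new, p_new = m.copy(), p.copy()
    p_new -= 0.5 * eps * grad(m_new)           # half step in momentum
    for _ in range(n_leapfrog - 1):
        m_new += eps * p_new                   # full step in position
        p_new -= eps * grad(m_new)             # full step in momentum
    m_new += eps * p_new
    p_new -= 0.5 * eps * grad(m_new)           # final half step
    h_old = neg_log_post(m) + 0.5 * p @ p      # H = U + kinetic energy
    h_new = neg_log_post(m_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(h_old - h_new):   # Metropolis test
        return m_new, True
    return m, False
```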
“…slower convergences towards the stationary regime and a stable posterior, and an increase of the correlation between successively sampled models). For this reason, the applicability of this strategy should be evaluated case by case (Aleardi and Salusti, 2020). On the other hand, the FD approach could be computationally prohibitive in the case of hundreds of parameters to be inferred from the data, even though a DCT compression of the elastic model space could be useful to partially reduce the overall computational effort (Aleardi, 2020).…”
Section: Methods
confidence: 99%
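The DCT compression mentioned in this statement can be sketched as follows: the model is represented by its first few discrete cosine transform coefficients, so the sampler explores a much smaller space. This is a hedged illustration assuming a 1D model vector and `scipy`; the truncation level and the example profile are arbitrary choices, not values from Aleardi (2020).

```python
import numpy as np
from scipy.fft import dct, idct

def compress(model, n_keep):
    """Keep only the first n_keep DCT coefficients of a 1D model."""
    return dct(model, norm="ortho")[:n_keep]

def expand(coeffs, n_full):
    """Map reduced DCT coefficients back to the full model space."""
    padded = np.zeros(n_full)
    padded[:coeffs.size] = coeffs
    return idct(padded, norm="ortho")

# Example: a 300-sample velocity profile summarized by 30 unknowns,
# so the inversion runs in a 30-dimensional space instead of 300.
rng = np.random.default_rng(0)
profile = np.linspace(2000.0, 3500.0, 300) + 50.0 * rng.standard_normal(300)
reduced = compress(profile, 30)
approx = expand(reduced, profile.size)
```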
“…In addition, the sampling ability of MCMC algorithms severely decreases in high‐dimensional model spaces due to the so‐called curse of dimensionality problem (Curtis and Lomax, 2001). Over the last decades, many MCMC approaches have been proposed to mitigate these issues, and in recent years the Hamiltonian Monte Carlo (HMC; Duane et al., 1987) approach has also been used to solve geophysical inversions (Sen and Biswas, 2017; Fichtner and Simutė, 2018; Fichtner and Zunino, 2019; Fichtner et al., 2019; Aleardi and Salusti, 2020; Gebraad et al., 2020). Different from classical MCMC methods, this approach exploits the derivative information of the misfit function (the negative natural logarithm of the posterior) to explore the model space and to rapidly converge towards a stable posterior.…”
Section: Introduction
confidence: 99%
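For a linearized forward operator with Gaussian noise and prior, the misfit referred to above (the negative natural logarithm of the posterior) and its gradient take a simple closed form. The sketch below assumes this Gaussian setting purely for illustration; a nonlinear AVA operator would replace `G @ m` with forward modelling and `G.T` with the adjoint of its Jacobian.

```python
import numpy as np

def gaussian_neg_log_post(G, d, Cd_inv, m_prior, Cm_inv):
    """Return U(m) = -log p(m | d) (up to a constant) and its gradient
    for a linear(ized) forward operator G, Gaussian noise with inverse
    covariance Cd_inv, and a Gaussian prior with inverse covariance Cm_inv."""
    def U(m):
        r = G @ m - d                  # data residual
        dm = m - m_prior               # deviation from the prior mean
        return 0.5 * (r @ Cd_inv @ r + dm @ Cm_inv @ dm)

    def grad_U(m):
        return G.T @ Cd_inv @ (G @ m - d) + Cm_inv @ (m - m_prior)

    return U, grad_U
```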
“…As an alternative, gradient‐based MCMC (GB‐MCMC) methods (e.g. Hamiltonian Monte Carlo, Langevin Monte Carlo; Sen and Biswas, 2017; Fichtner and Simutė, 2018; Fichtner and Zunino, 2019; Fichtner et al., 2019; Aleardi, 2020a; Aleardi and Salusti, 2020; Gebraad et al., 2020) exploit the gradient information of the misfit function (the negative natural logarithm of the posterior) to efficiently explore the model space and to rapidly converge towards stable posterior uncertainties (MacKay, 2003; Neal, 2011). The main computational requirement of these methods is the need for computing derivatives, although this information is highly beneficial for speeding up the convergence of the sampling and guaranteeing high independence of the samples while maintaining high acceptance rates.…”
Section: Introduction
confidence: 99%
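Langevin Monte Carlo, the other gradient-based sampler named in this statement, can be illustrated by a single Metropolis-adjusted Langevin (MALA) step: the gradient of the misfit drifts the proposal towards high-posterior regions, and an asymmetric Metropolis–Hastings test corrects for the resulting bias. This is again a minimal sketch with an assumed step size, not the implementation used in the cited works.

```python
import numpy as np

def mala_step(m, neg_log_post, grad, eps=0.05, rng=None):
    """One Metropolis-adjusted Langevin step: a gradient-guided proposal
    followed by an asymmetric Metropolis-Hastings accept/reject test."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad(m)
    prop = m - 0.5 * eps**2 * g + eps * rng.standard_normal(m.size)
    g_prop = grad(prop)

    def log_q(a, b, g_b):
        # log density (up to a constant) of proposing a when starting from b
        diff = a - b + 0.5 * eps**2 * g_b
        return -(diff @ diff) / (2.0 * eps**2)

    log_alpha = (neg_log_post(m) - neg_log_post(prop)
                 + log_q(m, prop, g_prop) - log_q(prop, m, g))
    if np.log(rng.random()) < log_alpha:
        return prop, True
    return m, False
```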