2015
DOI: 10.1007/s11222-015-9574-5
Bayesian computation: a summary of the current state, and samples backwards and forwards

Abstract: Recent decades have seen enormous improvements in computational inference for statistical models; there have been competitive continual enhancements in a wide range of computational tools. In Bayesian inference, first and foremost, MCMC techniques have continued to evolve, moving from random walk proposals to Langevin drift, to Hamiltonian Monte Carlo, and so on, with both theoretical and algorithmic innovations opening new opportunities to practitioners. However, this impressive evolution in capacity is confr…
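The abstract's progression from random-walk proposals to Langevin drift can be illustrated concretely. The sketch below (my own minimal example, not from the paper) implements one transition of the Metropolis-adjusted Langevin algorithm (MALA) for a standard-normal target: the proposal drifts along the gradient of the log-density and an asymmetric Metropolis-Hastings correction keeps the target invariant.

```python
import numpy as np

# Minimal MALA sketch for a N(0, 1) target (illustrative, not the
# paper's code). Setting the drift to zero recovers random-walk
# Metropolis; MALA adds a gradient-informed mean shift.

def log_pi(x):
    return -0.5 * x**2           # log N(0, 1) density, up to a constant

def grad_log_pi(x):
    return -x

def mala_step(x, step, rng):
    # Langevin proposal: drift toward higher density, then diffuse.
    mean_fwd = x + 0.5 * step**2 * grad_log_pi(x)
    y = mean_fwd + step * rng.standard_normal()
    mean_bwd = y + 0.5 * step**2 * grad_log_pi(y)
    # Metropolis-Hastings correction for the asymmetric proposal.
    log_q_fwd = -0.5 * ((y - mean_fwd) / step) ** 2
    log_q_bwd = -0.5 * ((x - mean_bwd) / step) ** 2
    log_alpha = log_pi(y) + log_q_bwd - log_pi(x) - log_q_fwd
    return y if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
x, chain = 3.0, []
for _ in range(5000):
    x = mala_step(x, step=1.0, rng=rng)
    chain.append(x)
```

After a short burn-in the chain's sample mean and variance should be close to 0 and 1, the moments of the target.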


Cited by 164 publications (138 citation statements)
References 236 publications
“…As discussed in [48], some asymptotic analysis of these algorithms shows that, in the stationary regime, the random-walk […] As for the MALA, the authors in [41] proposed a generalization of this HMC algorithm by considering Hamiltonian dynamics on a manifold in order to be able to take into account the local structure of the target distribution. The…”
Section: B. On Hamiltonian-Based MCMC Kernel
confidence: 99%
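The HMC algorithm this statement refers to can be sketched in a few lines. The example below is a minimal, illustrative transition for a standard-normal target (my assumptions, not the cited manifold variant): a leapfrog integrator simulates Hamiltonian dynamics with resampled momentum, and a Metropolis accept/reject step corrects the discretisation error.

```python
import numpy as np

# Minimal HMC transition for a N(0, 1) target (illustrative sketch).
# The potential is U(q) = q^2 / 2, so grad U(q) = q.

def grad_U(q):
    return q

def hmc_step(q, eps, n_leapfrog, rng):
    p = rng.standard_normal()                 # fresh momentum ~ N(0, 1)
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)        # initial half step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)        # final half step
    # Accept with probability exp(H_old - H_new).
    h_old = 0.5 * q**2 + 0.5 * p**2
    h_new = 0.5 * q_new**2 + 0.5 * p_new**2
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

rng = np.random.default_rng(1)
q, draws = 2.0, []
for _ in range(4000):
    q = hmc_step(q, eps=0.3, n_leapfrog=10, rng=rng)
    draws.append(q)
```

The manifold generalization mentioned in the quote replaces the identity momentum covariance with a position-dependent metric, adapting the dynamics to the local geometry of the target.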
“…However, these methods often struggle to detect multiple divergence times across pairs of populations (Oaks et al., 2013) or have little information to update a priori expectations (Oaks, 2014). More fundamentally, the loss of information inherent to ABC approaches can prevent them from discriminating among models (Robert et al., 2011; Marin et al., 2013; Green et al., 2015). Figure 1.…”
Section: Introduction
confidence: 99%
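The "loss of information" in ABC comes from comparing data only through summary statistics within a tolerance. A minimal rejection-ABC sketch (illustrative assumptions throughout: normal model, sample mean as the summary, hand-picked tolerance) shows the mechanism:

```python
import numpy as np

# Rejection ABC for the mean of a N(theta, 1) model (illustrative).
# Prior draws are kept only when simulated data have a summary
# statistic (the sample mean) close to the observed one; any feature
# of the data not captured by the summary is discarded, which is the
# information loss the quoted statement refers to.

rng = np.random.default_rng(2)
observed = rng.normal(loc=1.5, scale=1.0, size=100)
s_obs = observed.mean()

accepted = []
for _ in range(20000):
    theta = rng.normal(0.0, 3.0)                 # draw from the prior
    fake = rng.normal(theta, 1.0, size=100)      # simulate a data set
    if abs(fake.mean() - s_obs) < 0.05:          # tolerance on summary
        accepted.append(theta)

posterior_mean = np.mean(accepted)
```

Because the sample mean is (nearly) sufficient here, the accepted draws approximate the true posterior; with insufficient summaries, as the quote notes, model comparison can fail entirely.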
“…A main computational advantage of the MAP estimator (3) is that it can be computed very efficiently, even in high dimensions, by using convex optimisation algorithms (e.g. [10], [9], [19]). However, since MAP estimation results in a single point estimator, we lose uncertainty information that sampling approaches like MCMC methods can provide [5].…”
Section: Problem Formulation
confidence: 99%
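The convex-optimisation route to the MAP estimator mentioned in this statement can be sketched concretely. The example below (my own assumptions: a linear-Gaussian likelihood with a Laplace prior, i.e. an l1 penalty) minimises the convex negative log-posterior with proximal gradient descent (ISTA); it returns a single point estimate and, as the quote notes, no uncertainty quantification.

```python
import numpy as np

# MAP estimation by proximal gradient descent (ISTA) for
#   y = A x + noise,  prior: Laplace (l1 penalty with weight lam).
# The negative log-posterior ||A x - y||^2 / 2 + lam ||x||_1 is convex.

rng = np.random.default_rng(3)
n, d = 50, 10
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.0, 0.5]                  # sparse ground truth
y = A @ x_true + 0.1 * rng.standard_normal(n)

lam = 0.5                                      # prior (penalty) strength
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant

x = np.zeros(d)
for _ in range(500):
    grad = A.T @ (A @ x - y)                   # gradient of smooth term
    z = x - step * grad
    # Soft-thresholding: the proximal operator of the l1 penalty.
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

map_estimate = x
```

Each iteration costs one matrix-vector product, which is why such MAP estimates scale to high dimensions far more cheaply than MCMC sampling of the full posterior.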