2011
DOI: 10.1111/j.1467-9868.2010.00765.x
Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods

Abstract: The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis-Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin…
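The proposal mechanism at the heart of the paper's simplified manifold MALA can be sketched compactly. Below is a minimal, hedged Python illustration that assumes a fixed (position-independent) metric G and therefore omits the metric-derivative terms of the full manifold algorithm; log_post, grad_log_post, G_inv, and L_chol are hypothetical caller-supplied quantities.

```python
import numpy as np

def simplified_mmala_step(x, log_post, grad_log_post, G_inv, L_chol, eps, rng):
    """One simplified manifold-MALA step with a fixed metric G (a sketch).

    Proposal: x' = x + (eps^2/2) G^{-1} grad log pi(x) + eps G^{-1/2} z,
    followed by a Metropolis-Hastings accept/reject correction.
    L_chol must satisfy L_chol @ L_chol.T == G_inv; x is a float array.
    """
    def mean(y):
        return y + 0.5 * eps**2 * G_inv @ grad_log_post(y)

    z = rng.standard_normal(x.size)
    x_prop = mean(x) + eps * L_chol @ z

    # log q(y_to | y_from) up to a constant, under N(mean(y_from), eps^2 G^{-1});
    # solve(G_inv, d) computes G @ d without forming G explicitly.
    def log_q(y_to, y_from):
        d = y_to - mean(y_from)
        return -0.5 / eps**2 * d @ np.linalg.solve(G_inv, d)

    log_alpha = (log_post(x_prop) - log_post(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True
    return x, False
```

With G chosen as the Fisher information (plus the negative Hessian of the log-prior), the proposal automatically rescales each direction of the parameter space, which is the adaptation mechanism the abstract refers to.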

Cited by 1,262 publications (1,768 citation statements) · References 180 publications
“…One can also use the second-order variational equations in connection with a Markov Chain Monte Carlo (MCMC) method. Specifically, for both Riemann Manifold Langevin and Hamiltonian Monte Carlo methods, higher order derivatives, and therefore higher order variational equations, are essential (Girolami & Calderhead 2011). A full discussion of these MCMC methods and their application goes beyond the scope of this paper but we note that our initial tests of these methods show great promise.…”
mentioning · confidence: 99%
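The point about higher-order derivatives can be made concrete: the RMHMC Hamiltonian of Girolami & Calderhead (2011) couples position and momentum through the metric G(θ), so its position gradient involves ∂G/∂θ. The sketch below is a hedged illustration, not the paper's implementation; log_post and metric are hypothetical callables, and finite differences stand in for the analytic metric derivatives used in the paper.

```python
import numpy as np

def rmhmc_hamiltonian(theta, p, log_post, metric):
    """H(theta, p) = -log pi(theta) + 0.5 log|G(theta)| + 0.5 p^T G(theta)^{-1} p
    (additive constant dropped). The kinetic term depends on position through
    G(theta), so the dynamics are non-separable."""
    G = metric(theta)
    _, logdet = np.linalg.slogdet(G)
    return -log_post(theta) + 0.5 * logdet + 0.5 * p @ np.linalg.solve(G, p)

def dH_dtheta_fd(theta, p, log_post, metric, h=1e-6):
    """Central-difference gradient of H in theta; each component implicitly
    differentiates G(theta), i.e., the higher-order derivative the quote
    above refers to."""
    grad = np.empty_like(theta, dtype=float)
    for i in range(theta.size):
        e = np.zeros_like(theta, dtype=float)
        e[i] = h
        grad[i] = (rmhmc_hamiltonian(theta + e, p, log_post, metric)
                   - rmhmc_hamiltonian(theta - e, p, log_post, metric)) / (2 * h)
    return grad
```

When G is the Fisher information of an ODE model, evaluating it already requires first-order sensitivities, and ∂G/∂θ requires second-order ones, which is where the second-order variational equations mentioned in the quote enter.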
“…In addition to the slice sampler and ARS, the current version of MfUSampler (1.0.4) contains the adaptive rejection Metropolis sampler (Gilks, Best, and Tan 1995) and the univariate Metropolis sampler with Gaussian proposal. Univariate samplers have their limits: when the posterior distribution exhibits a strong correlation structure, one-coordinate-at-a-time algorithms can become inefficient as they fail to capture important geometry of the space (Girolami and Calderhead 2011). This has been a key motivation for research on black-box multivariate samplers, such as adaptations of the slice sampler (Thompson 2011) or the no-U-turn sampler (Hoffman and Gelman 2014).…”
Section: Introduction · mentioning · confidence: 99%
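The inefficiency described in this quote is easy to reproduce. The toy Python script below (an illustrative sketch, not taken from any of the cited packages) runs one-coordinate-at-a-time Metropolis on a strongly correlated 2-D Gaussian; the near-unit lag-1 autocorrelation along the ridge direction is the poor mixing in question.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.99                                   # strong pairwise correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
prec = np.linalg.inv(cov)

def log_pi(x):                               # 2-D correlated Gaussian target
    return -0.5 * x @ prec @ x

# One-coordinate-at-a-time Metropolis: every move is axis-aligned, so the
# usable step size is capped by the narrow width of the correlated ridge.
x = np.zeros(2)
samples = []
for _ in range(5000):
    for i in range(2):
        prop = x.copy()
        prop[i] += 0.5 * rng.standard_normal()
        if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
            x = prop
    samples.append(x.copy())
samples = np.asarray(samples)

# Slow mixing shows up as lag-1 autocorrelation close to 1 along the ridge.
ridge = samples @ np.array([1.0, 1.0]) / np.sqrt(2.0)
print(np.corrcoef(ridge[:-1], ridge[1:])[0, 1])
```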
“…Another flavor of MH is the t-walk algorithm (Christen and Fox 2010), which uses a set of scale-invariant proposal distributions to co-evolve two points in the state space. Hamiltonian Monte Carlo (HMC) algorithms (Girolami and Calderhead 2011; Neal 2011) have also gained popularity due to emerging techniques for their automated tuning (Hoffman and Gelman 2014).…”
Section: Introduction · mentioning · confidence: 99%
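For reference, a single step of standard (Euclidean-metric) HMC is compact enough to sketch. This is a minimal, hedged Python version; the step size eps and leapfrog count n_leapfrog are precisely the hand-tuned quantities that the no-U-turn sampler (Hoffman and Gelman 2014) automates.

```python
import numpy as np

def hmc_step(x, log_post, grad_log_post, eps, n_leapfrog, rng):
    """One HMC step: leapfrog integration of Hamiltonian dynamics followed
    by a Metropolis accept/reject. x must be a float array."""
    p = rng.standard_normal(x.size)          # resample auxiliary momentum
    x_new, p_new = x.copy(), p.copy()

    p_new += 0.5 * eps * grad_log_post(x_new)      # initial half momentum step
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                       # full position step
        p_new += eps * grad_log_post(x_new)        # full momentum step
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(x_new)      # final half momentum step

    # H = -log pi(x) + |p|^2 / 2; accept with probability min(1, e^{H_old - H_new})
    h_old = -log_post(x) + 0.5 * p @ p
    h_new = -log_post(x_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:
        return x_new
    return x
```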
“…It has been noted many times that ODE models of biochemical networks generally exhibit widely varying parameter sensitivities [24][25][26][27][28]; investigation of second-order sensitivities of these models, evaluated at the maximum likelihood, often reveals a wide eigenvalue spectrum that itself may change depending on the point in parameter space at which it is calculated. In settings with such varying parameter scalings, standard Markov chain Monte Carlo (MCMC) samplers generally have very poor mixing properties and produce highly correlated samples [23], resulting in estimates of the required Bayesian quantities with large Monte Carlo errors. This is often a result of structural unidentifiability of the model [20], such that parameters cannot be estimated with low variance.…”
Section: Introduction · mentioning · confidence: 99%
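The diagnostic this quote describes, examining the eigenvalue spectrum of second-order sensitivities at the maximum likelihood estimate, can be sketched generically. In the hedged Python fragment below, neg_log_lik and theta_ml are hypothetical placeholders for a concrete model's negative log-likelihood and its ML estimate.

```python
import numpy as np

def hessian_fd(f, theta, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at theta."""
    n = theta.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * h**2)
    return H

# A wide eigenvalue spread in the Hessian of -log L at the ML estimate signals
# the widely varying parameter sensitivities described in the quote
# (neg_log_lik and theta_ml are placeholders for a concrete model):
# eigs = np.linalg.eigvalsh(hessian_fd(neg_log_lik, theta_ml))
# print(eigs.max() / eigs.min())   # condition number of the local geometry
```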
“…The likelihood of these models can be expensive to evaluate, since it involves approximately solving the system of ODEs with a numerical integration scheme for each set of proposed parameters. Although generally unavoidable, this cost can be minimized through efficient exploration of the parameter space, which can be measured in terms of effective sample size (ESS), normalized by the overall computational time [23].…”
Section: Introduction · mentioning · confidence: 99%
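The efficiency measure cited here, ESS normalized by overall computational time, takes only a few lines to compute. Below is a hedged Python sketch using a simple truncated-autocorrelation ESS estimate (one common heuristic among several); run_sampler is a hypothetical stand-in for any of the samplers discussed above.

```python
import numpy as np

def ess(x):
    """Effective sample size of a 1-D chain: n / (1 + 2 * sum of positive
    autocorrelations), truncated at the first non-positive lag."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)  # acf[0] == 1
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:
            break
        tau += 2.0 * acf[k]
    return n / tau

# Efficiency metric from the quote: effective samples per second of wall time.
# t0 = time.perf_counter(); chain = run_sampler(...); dt = time.perf_counter() - t0
# print(ess(chain) / dt)
```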