2018
DOI: 10.48550/arxiv.1810.04449
Preprint

Faster Hamiltonian Monte Carlo by Learning Leapfrog Scale

Abstract: Hamiltonian Monte Carlo samplers have become standard algorithms for MCMC implementations, as opposed to more basic versions, but they still require some amount of tuning and calibration. Exploiting the U-turn criterion of the NUTS algorithm (Hoffman and Gelman, 2014), we propose a version of HMC that relies on the distribution of the integration time of the associated leapfrog integrator. Additionally using the primal-dual averaging method for tuning the step size of the integrator, we achieve an essentially calibration-free version of HMC.
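
To make the proposal concrete, here is a minimal Python sketch of a leapfrog integrator together with an HMC transition whose number of leapfrog steps is drawn from an empirical pool of U-turn lengths collected during warm-up. This illustrates the idea only and is not the authors' implementation; eps, L_pool and the function names are assumptions.

import numpy as np

def leapfrog(theta, r, grad_log_p, eps, L):
    # Standard leapfrog integrator: L steps of size eps.
    r = r + 0.5 * eps * grad_log_p(theta)      # initial half-step on momentum
    for _ in range(L - 1):
        theta = theta + eps * r                # full position step
        r = r + eps * grad_log_p(theta)        # full momentum step
    theta = theta + eps * r                    # last position step
    r = r + 0.5 * eps * grad_log_p(theta)      # final half-step on momentum
    return theta, r

def hmc_step_with_learned_scale(theta, log_p, grad_log_p, eps, L_pool, rng):
    # One HMC transition whose leapfrog length L is drawn from the
    # empirical pool of U-turn lengths collected during warm-up
    # (L_pool is a hypothetical name for that recorded sample).
    L = int(rng.choice(L_pool))
    r0 = rng.standard_normal(theta.shape)
    theta_new, r_new = leapfrog(theta.copy(), r0, grad_log_p, eps, L)
    # Metropolis correction for the Hamiltonian H = -log_p + |r|^2 / 2
    log_alpha = (log_p(theta_new) - 0.5 * r_new @ r_new
                 - log_p(theta) + 0.5 * r0 @ r0)
    return theta_new if np.log(rng.uniform()) < log_alpha else theta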

Cited by 4 publications (6 citation statements)
References 8 publications (9 reference statements)
“…We see that manually-optimized Hamiltonian zigzag delivers substantial increases in ESS compared to Zigzag-NUTS. Such efficiency gains are also observed by the authors who proposed alternative methods for tuning HMC (Wang et al., 2013; Wu et al., 2018). The results here indicate that their tuning approaches may be worthy alternatives to Zigzag-NUTS and may further increase Hamiltonian zigzag's overall advantage over Markovian zigzag.…”
Section: Simulation Set-up and Efficiency Metrics (supporting)
confidence: 78%
“…A popular approach suggested in [32] tunes L based on the ESJD by doubling L until the path makes a U-turn and retraces back towards the starting point, that is, by no longer increasing L once the distance to the proposed state reaches a stationary point [4]; see also [57] for a variation and [48] for a version using sequential proposals. Modern probabilistic programming languages such as Stan [12], PyMC3 [51], Turing [23,58] or TFP [39] furthermore allow for the adaptation of a diagonal or dense mass matrix within NUTS based on the sample covariance matrix.…”
Section: Related Work (mentioning)
confidence: 99%
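
The U-turn rule referenced in the quote above can be sketched compactly: integrate forward and stop once the momentum no longer points away from the starting point. A minimal Python illustration of that stopping rule follows (NUTS itself applies the criterion recursively to doubled subtrees; the helper names are illustrative).

import numpy as np

def u_turned(theta0, theta, r):
    # U-turn criterion: the trajectory starts heading back toward its
    # starting point once the momentum has no positive component along
    # the displacement from the start.
    return np.dot(theta - theta0, r) < 0.0

def steps_until_u_turn(theta0, r0, grad_log_p, eps, max_steps=1024):
    # Integrate forward one leapfrog step at a time and report how many
    # steps the first U-turn takes (an illustrative helper, not a
    # library routine).
    theta, r = theta0.copy(), r0.copy()
    for L in range(1, max_steps + 1):
        r += 0.5 * eps * grad_log_p(theta)
        theta += eps * r
        r += 0.5 * eps * grad_log_p(theta)
        if u_turned(theta0, theta, r):
            return L
    return max_steps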
“…We consider a stochastic volatility model [36,34] that has been used with minor variations for adapting HMC [25,32,57]. Assume that the latent log-volatilities follow an autoregressive AR(1) process so that h₁ ∼ N(0, σ²/(1 − φ²)) and for t ∈ {1, …”
Section: Stochastic Volatility Model (mentioning)
confidence: 99%
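
For concreteness, the AR(1) log-volatility dynamics in the quote can be simulated as below. The observation equation y_t = exp(h_t/2) · ε_t is the standard form of this model (the quote is truncated before it), and the parameter values are illustrative rather than taken from the paper.

import numpy as np

def simulate_sv(T, phi=0.98, sigma=0.15, seed=0):
    # AR(1) log-volatilities as in the quote:
    #   h_1 ~ N(0, sigma^2 / (1 - phi^2))    (stationary start)
    #   h_t = phi * h_{t-1} + sigma * eta_t, eta_t ~ N(0, 1)
    # Observations follow the standard SV form y_t = exp(h_t / 2) * eps_t.
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2))
    for t in range(1, T):
        h[t] = phi * h[t - 1] + sigma * rng.standard_normal()
    y = np.exp(h / 2.0) * rng.standard_normal(T)
    return y, h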
“…Therefore, NUTS does not require tuning of T during the warm-up phase. Following (Wu et al., 2018), we choose the time of integration T to be the 90th percentile of the trajectories followed by NUTS.…”
Section: Choice Of Parameters (mentioning)
confidence: 99%
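
A short sketch of that rule, assuming the leapfrog path lengths were recorded during a NUTS warm-up run; the helper name and the example values are hypothetical.

import numpy as np

def integration_time_from_warmup(path_lengths, eps, q=90):
    # Fix the integration time T at the q-th percentile of the leapfrog
    # path lengths recorded during a NUTS warm-up run, as in the rule
    # quoted above (Wu et al., 2018).
    L = np.percentile(path_lengths, q)
    return L * eps  # integration time = number of steps x step size

# Example with made-up warm-up path lengths:
T = integration_time_from_warmup([7, 15, 15, 31, 15, 63, 31, 15], eps=0.05)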