2018
DOI: 10.1007/s11222-018-9802-x

Irreversible samplers from jump and continuous Markov processes

Abstract: In this paper, we propose irreversible versions of the Metropolis-Hastings (MH) and Metropolis-adjusted Langevin algorithm (MALA) with a main focus on the latter. For the former, we show how one can simply switch between different proposal and acceptance distributions upon rejection to obtain an irreversible jump sampler (I-Jump). The resulting algorithm has a simple implementation akin to MH, but with the demonstrated benefits of irreversibility. We then show how the previously proposed MALA method can also b…
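
For orientation, here is a minimal sketch of the standard, reversible Metropolis-Hastings loop that the paper takes as its starting point. The Gaussian random-walk proposal and step size are illustrative placeholders, and the paper's irreversible switching between proposal and acceptance distributions upon rejection is deliberately not reproduced here.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=0.5, rng=None):
    """Plain reversible MH with a symmetric Gaussian random-walk proposal.
    (The paper's I-Jump sampler modifies this loop by switching between
    proposal/acceptance distributions upon rejection; that rule is not shown.)"""
    rng = rng or np.random.default_rng()
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_target(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        logp_prop = log_target(prop)
        # Symmetric proposal: the acceptance ratio reduces to the target ratio.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain[i] = x
    return chain
```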

Cited by 21 publications (38 citation statements). References 69 publications (162 reference statements).

“…Like with the gradients, one can employ a proxy model to calculate an approximate FI matrix. The MCMC approaches using the curvature information are known in the literature [15,24,25,28], but are beyond the scope of this paper.…”
Section: Discussion (mentioning)
confidence: 99%
“…We next demonstrate the performance of boosting with the FKL on a distribution with a large number of well-separated modes. We set as the target distribution a 2-dimensional Gaussian mixture model (GMM) with 20 components, previously used by Ma et al. [2019].…”
Section: Simulation 2: Well-separated Modes (mentioning)
confidence: 99%
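
As a point of reference, here is a minimal sketch of the kind of target used in that simulation: a 2-dimensional Gaussian mixture with 20 well-separated components. The means, weights, and covariances below are illustrative placeholders, not the exact configuration of Ma et al. [2019].

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Illustrative 2-D GMM with 20 well-separated modes: equal weights, means on a
# circle, small isotropic covariances (placeholder values, not the exact setup
# of Ma et al. [2019]).
K = 20
angles = 2 * np.pi * np.arange(K) / K
means = 10.0 * np.column_stack([np.cos(angles), np.sin(angles)])
weights = np.full(K, 1.0 / K)
cov = 0.5 * np.eye(2)

def log_target(x):
    """Log density of the mixture at a single 2-D point."""
    comp_logpdfs = np.array([stats.multivariate_normal(m, cov).logpdf(x) for m in means])
    return logsumexp(comp_logpdfs, b=weights)

def sample_target(n):
    """Exact draws: pick a component, then a Gaussian sample from it."""
    comps = rng.choice(K, size=n, p=weights)
    return means[comps] + rng.multivariate_normal(np.zeros(2), cov, size=n)
```

Exact draws from sample_target provide a ground truth against which approximate samples (or boosting iterates q_k) can be compared.
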
“…Figure 4: Log-residual (log p/q_k) plots for FKL boosting on a 2-dimensional GMM of 20 components [Ma et al., 2019].…”
Section: Real Data Experiments (mentioning)
confidence: 99%
“…To compare the efficiency of the algorithms we compute a normalised effective sample size (nESS), where the normalisation is by the number of samples N. Following [6], we define the effective sample size ESS = N/τ_int, where N is the number of steps of the chain (after appropriate burn-in) and τ_int is the integrated autocorrelation time, τ_int := 1 + Σ_k γ(k), where γ(k) is the lag-k autocorrelation. Consistently, the normalised ESS, nESS, is just nESS := ESS/N.…”
Section: Numerics: Sampling From Measures Supported On Bounded Domains (mentioning)
confidence: 99%
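
A minimal sketch of the quoted diagnostic, assuming a 1-dimensional chain stored as a NumPy array. The lag-k autocorrelation and the simple truncation used below are placeholder choices, not the estimator actually used in [6].

```python
import numpy as np

def autocorr(chain, k):
    """Lag-k sample autocorrelation of a 1-D chain (after burn-in removal)."""
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    return float(x[:len(x) - k] @ x[k:] / (x @ x))

def ness(chain, max_lag=None):
    """Normalised effective sample size nESS = ESS / N with ESS = N / tau_int,
    following the quoted definition tau_int = 1 + sum_k gamma(k).
    The plain truncation at max_lag is a placeholder, not a window estimator."""
    n = len(chain)
    max_lag = max_lag or max(1, n // 10)
    tau_int = 1.0 + sum(autocorr(chain, k) for k in range(1, max_lag))
    return 1.0 / tau_int  # nESS = (N / tau_int) / N
```
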
“…Notice that nESS can be bigger than one (when the samples are negatively correlated), and this is something that will appear in our simulations. As an estimator for τ_int we will take the Bartlett window estimator (see for example [6, Section 6], and references therein) rather than the initial monotone sequence estimator (see again [6, Section 6]), as the former is more suited to include non-reversible chains. Since the nESS is itself a random quantity, we performed 10 runs of each case using different seeds, and our plots below show P10, P50, P90 percentiles of the nESS from these runs.…”
Section: Numerics: Sampling From Measures Supported On Bounded Domains (mentioning)
confidence: 99%
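
The passage contrasts two estimators of τ_int. Below is a rough, self-contained sketch of a Bartlett (triangular) lag-window version of the same quantity; the window length M is an arbitrary placeholder and this is not necessarily the exact estimator discussed in [6, Section 6]. The initial monotone sequence estimator is omitted because, as the quote notes, it is less suited to non-reversible chains.

```python
import numpy as np

def autocorr(chain, k):
    """Lag-k sample autocorrelation of a 1-D chain."""
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    return float(x[:len(x) - k] @ x[k:] / (x @ x))

def tau_int_bartlett(chain, M=None):
    """Integrated autocorrelation time with Bartlett (triangular) lag weights:
    tau_int ~ 1 + sum_{k=1}^{M-1} (1 - k/M) * gamma(k).
    The window length M is a placeholder; [6, Section 6] discusses the choice."""
    n = len(chain)
    M = M or max(2, n // 20)
    return 1.0 + sum((1.0 - k / M) * autocorr(chain, k) for k in range(1, M))
```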