2018
DOI: 10.48550/arxiv.1805.07179
Preprint

Markov Chain Importance Sampling -- a highly efficient estimator for MCMC

Abstract: Markov chain algorithms are ubiquitous in machine learning and statistics and many other disciplines. In this work we present a novel estimator applicable to several classes of Markov chains, dubbed Markov chain importance sampling (MCIS). For a broad class of Metropolis-Hastings algorithms, MCIS efficiently makes use of rejected proposals. For discretized Langevin diffusions, it provides a novel way of correcting the discretization error. Our estimator satisfies a central limit theorem and improves on error p…
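The abstract's key move, reusing rejected Metropolis-Hastings proposals, can be illustrated with a short, self-contained sketch. This is our reading of the idea, not the authors' code, and every name below is hypothetical: each proposal $y \sim q(\cdot \mid x)$, accepted or not, receives the importance weight $\tilde\pi(y)/q(y \mid x)$, and a self-normalized average over all proposals estimates $\pi(f)$.

```python
import numpy as np

def mcis_rwm(log_target, x0, n_iter, step=1.0, rng=None):
    """Random-walk Metropolis that also records every proposal with an
    importance weight, so rejected proposals still contribute.
    A sketch of the MCIS idea from the abstract, not the authors' code."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    props, logw = [], []
    for _ in range(n_iter):
        y = x + step * rng.standard_normal()      # proposal y ~ q(.|x)
        # log weight = log pi(y) - log q(y|x); the Gaussian normalising
        # constant is identical for every proposal and cancels below.
        logw.append(log_target(y) + 0.5 * ((y - x) / step) ** 2)
        props.append(y)
        # standard MH accept/reject keeps the chain pi-invariant
        if np.log(rng.random()) < log_target(y) - log_target(x):
            x = y
    return np.array(props), np.array(logw)

# usage: estimate E[X^2] under a standard normal target
props, logw = mcis_rwm(lambda x: -0.5 * x**2, x0=0.0, n_iter=50_000)
w = np.exp(logw - logw.max())                     # numerically stable weights
print((w * props**2).sum() / w.sum())             # close to 1.0
```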

Cited by 3 publications (7 citation statements) | References 25 publications
“…Namely, as in the proof of Theorem 4, we find that $(\Theta_k, Y_k, \tilde{\Theta}_{k+1}, \tilde{Y}_{k+1})_{k \ge 1}$ is a Harris recurrent Markov chain with invariant distribution $\pi_\delta(\theta, y, \tilde{\theta}, \tilde{y}) = \pi_\delta(\theta, y)\, q(\theta, y; \tilde{\theta}, \tilde{y})$ and $\pi(\theta, y, \tilde{\theta}, \tilde{y}) / \pi_\delta(\theta, y, \tilde{\theta}, \tilde{y}) = c\, w_{\delta,\cdot}(y)$, where $q(\theta, y; \theta', y') = q(\theta, \theta')\, g(y' \mid \theta')$. Therefore, $\hat{E}^{WR}_{\delta,\cdot}(f)$ is a strongly consistent estimator; see (Rudolf and Sprungk, 2018; Schuster and Klebanov, 2018) for alternative waste-recycling estimators based on importance sampling analogues.…”
Section: Supplement D Details Of Extensions In Section
confidence: 95%
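For readers who want the reasoning step spelled out, the generic pattern behind such estimators is as follows (in our notation, which need not match the cited papers): Harris recurrence gives convergent ergodic averages under $\pi_\delta$, and the density ratio $\pi/\pi_\delta \propto w$ turns them into a self-normalized importance sampling estimator in which the unknown constant $c$ cancels:

\[
\hat{E}_K(f) \;=\; \frac{\sum_{k=1}^{K} w(Z_k)\, f(Z_k)}{\sum_{k=1}^{K} w(Z_k)}
\;\xrightarrow[K \to \infty]{\text{a.s.}}\; \pi(f),
\qquad Z_k = (\Theta_k, Y_k, \tilde{\Theta}_{k+1}, \tilde{Y}_{k+1}).
\]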
“…In the so-called layered adaptive importance sampling (LAIS) algorithm [33] and similar methods [46], an MCMC algorithm is used to obtain a set of mean parameters $\{\mu_1, \ldots, \mu_T\}$. Then one sample $x_t$ is drawn from a proposal density with mean $\mu_t$, i.e., $x_t \sim q(x_t \mid \mu_t, C)$, where $C$ is a covariance matrix and $t = 1, \ldots, T$.…”
Section: Application To Adaptive Importance Sampling
confidence: 99%
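A hedged sketch of the two-layer construction just described, assuming a Gaussian lower-layer proposal and a random-walk Metropolis upper layer (our choices; [33] covers several variants, and the function names are ours):

```python
import numpy as np

def lais(log_target, x0, T, step=1.0, C=1.0, rng=None):
    """Two-layer LAIS sketch: an upper MCMC layer produces means mu_1..mu_T,
    a lower layer draws one sample per mean from q(x | mu_t, C).
    Illustrative only, not the reference implementation of [33]."""
    rng = np.random.default_rng() if rng is None else rng
    mus, mu = [], x0
    # upper layer: random-walk Metropolis targeting pi yields the means
    for _ in range(T):
        prop = mu + step * rng.standard_normal()
        if np.log(rng.random()) < log_target(prop) - log_target(mu):
            mu = prop
        mus.append(mu)
    mus = np.array(mus)
    # lower layer: one draw per mean, x_t ~ N(mu_t, C)
    xs = mus + np.sqrt(C) * rng.standard_normal(T)
    return mus, xs
```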
“…where a temporal mixture is used in the denominator [33], [46]. With this choice, very good performance can be obtained, but the computational cost of evaluating the weight denominator grows as $T^2$ [33].…”
Section: Application To Adaptive Importance Sampling
confidence: 99%
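The quadratic cost is easy to see in code: each of the $T$ weights needs the proposal density at all $T$ means, a $T \times T$ evaluation grid. A sketch under the same assumptions as the `lais()` snippet above:

```python
import numpy as np
from scipy.stats import norm

def temporal_mixture_weights(xs, mus, C, log_target):
    """w_t = pi(x_t) / ((1/T) * sum_tau q(x_t | mu_tau, C)).
    The T x T density grid below is the O(T^2) cost noted in [33]."""
    q = norm.pdf(xs[:, None], loc=mus[None, :], scale=np.sqrt(C))
    denom = q.mean(axis=1)                    # temporal mixture denominator
    logw = log_target(xs) - np.log(denom)
    return np.exp(logw - logw.max())          # ready for self-normalisation

# usage with the lais() sketch above, estimating E[X] under N(0,1):
# mus, xs = lais(lambda x: -0.5 * x**2, 0.0, T=5000)
# w = temporal_mixture_weights(xs, mus, 1.0, lambda x: -0.5 * x**2)
# print((w * xs).sum() / w.sum())             # close to 0.0
```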
“…Special case with recycling samples. The method in [84] can be considered a special case of LAIS with $N = 1$ and $\mu_t = x_t$, i.e., all the samples $\{x_t\}_{t=1}^{T}$ are generated by a single MCMC chain with random-walk proposal $\varphi(x \mid x_{t-1}) = q(x \mid x_{t-1})$ and invariant density $\pi(x)$. In this scenario, the two layers of LAIS collapse into a single layer, so that $\mu_t = x_t$.…”
Section: Layered Adaptive Importance Sampling (Lais)
confidence: 99%
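To make the correspondence concrete, here is a hypothetical snippet reusing `lais()` from the sketch above: with $N = 1$ the lower-layer means are the chain states themselves, and the plain (non-mixture) denominator $q(x_t \mid \mu_t, C)$ already yields a consistent self-normalized estimator. This is our illustration of the collapse, not the notation of [84].

```python
import numpy as np
from scipy.stats import norm

log_target = lambda x: -0.5 * x**2            # N(0,1) target, unnormalised
mus, xs = lais(log_target, 0.0, T=2000)       # means = chain states (N = 1)
logw = log_target(xs) - norm.logpdf(xs, loc=mus, scale=1.0)
w = np.exp(logw - logw.max())
print((w * xs**2).sum() / w.sum())            # close to E[X^2] = 1
```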