We address the problem of upper bounding the mean square error of MCMC
estimators. Our analysis is nonasymptotic. We first establish a general result
valid for essentially all ergodic Markov chains encountered in Bayesian
computation and a possibly unbounded target function $f$. The bound is sharp in
the sense that the leading term is exactly $\sigma_{\mathrm{as}}^2(P,f)/n$,
where $\sigma_{\mathrm{as}}^2(P,f)$ is the CLT asymptotic variance. Next, we
proceed to specific additional assumptions and give explicit computable bounds
for geometrically and polynomially ergodic Markov chains under quantitative
drift conditions. As a corollary, we provide results on confidence estimation.

Comment: Published at http://dx.doi.org/10.3150/12-BEJ442 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
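The leading term $\sigma_{\mathrm{as}}^2(P,f)/n$ can be checked numerically in a case where the asymptotic variance is known in closed form. The following sketch (not from the paper; chain, parameters, and the batch-means estimator are illustrative choices) simulates a stationary AR(1) chain, for which $f(x)=x$ has $\sigma_{\mathrm{as}}^2 = (1+\rho)/(1-\rho)$, and recovers that value with a batch-means estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain X_{t+1} = rho*X_t + sqrt(1-rho^2)*eps_t, stationary N(0,1).
# For f(x) = x the CLT asymptotic variance is (1+rho)/(1-rho).
rho, n = 0.5, 200_000
x = np.empty(n)
x[0] = rng.standard_normal()
noise = np.sqrt(1 - rho**2) * rng.standard_normal(n)
for t in range(n - 1):
    x[t + 1] = rho * x[t] + noise[t]

# Batch-means estimate of sigma_as^2: split the run into m batches of size b;
# b * (sample variance of batch means) estimates the asymptotic variance.
b = int(np.sqrt(n))
m = n // b
batch_means = x[:m * b].reshape(m, b).mean(axis=1)
sigma2_hat = b * batch_means.var(ddof=1)

true_sigma2 = (1 + rho) / (1 - rho)   # = 3 for rho = 0.5
print(sigma2_hat, true_sigma2)
```

With $n = 2\times 10^5$ the batch-means estimate lands close to the true value 3, while the naive i.i.d. variance of the chain (which equals 1 here) would understate the Monte Carlo error by a factor of $\sigma_{\mathrm{as}}^2$.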
We assume a drift condition towards a small set and bound the mean square
error of estimators obtained by taking averages along a single trajectory of a
Markov chain Monte Carlo algorithm. We use these bounds to construct
fixed-width nonasymptotic confidence intervals. For a possibly unbounded
function $f:\mathcal{X} \to \mathbb{R}$, let $I=\int_{\mathcal{X}} f(x)\pi(x)\,dx$ be the value of
interest and $\hat{I}_{t,n}=(1/n)\sum_{i=t}^{t+n-1}f(X_i)$ its MCMC estimate.
Precisely, we derive lower bounds for the length of the trajectory $n$ and
burn-in time $t$ which ensure that $$P(|\hat{I}_{t,n}-I|\leq \varepsilon)\geq
1-\alpha.$$ The bounds depend only and explicitly on the drift parameters, on the
$V$-norm of $f$, where $V$ is the drift function, and on the precision and
confidence parameters $\varepsilon, \alpha.$ Next we analyse an MCMC estimator
based on the median of multiple shorter runs that allows for sharper bounds on
the required total simulation cost. In particular, the methodology can be
applied to computing Bayesian estimators in practically relevant models. We
illustrate our bounds numerically in a simple example.
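A minimal sketch of the estimator $\hat{I}_{t,n}$ with an explicit burn-in $t$ and trajectory length $n$ (the target, proposal scale, and test function below are illustrative assumptions, not the paper's example): a random-walk Metropolis chain targeting $N(0,1)$, estimating $I = E[X^2] = 1$ from a deliberately bad starting point.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    # Standard normal target; log density up to an additive constant.
    return -0.5 * x * x

# Random-walk Metropolis chain; t = burn-in, n = averaging length.
t, n = 2_000, 200_000
x, chain = 5.0, []          # bad starting point, absorbed by the burn-in
for _ in range(t + n):
    y = x + rng.normal(scale=2.4)
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        x = y
    chain.append(x)

# I_hat = (1/n) * sum_{i=t}^{t+n-1} f(X_i) with f(x) = x^2, so I = 1.
I_hat = np.mean([s * s for s in chain[t:]])
print(I_hat)
```

In the paper's setting, $t$ and $n$ would be chosen from the drift-condition bounds so that $P(|\hat{I}_{t,n}-I|\leq\varepsilon)\geq 1-\alpha$; here they are simply fixed by hand.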
The standard Markov chain Monte Carlo method of estimating an expected value is to generate a Markov chain which converges to the target distribution and then compute correlated sample averages. In many applications the quantity of interest $\theta$ is represented as a product of expected values, $\theta = \mu_1 \cdots \mu_k$, and a natural estimator is a product of averages. To increase the confidence level, we can compute a median of independent runs. The goal of this paper is to analyze such an estimator $\hat{\theta}$, i.e. an estimator which is a 'median of products of averages' (MPA). Sufficient conditions are given for $\hat{\theta}$ to have fixed relative precision at a given level of confidence, that is, to satisfy $P(|\hat{\theta} - \theta| \leq \theta\varepsilon) \geq 1 - \alpha$. Our main tool is a new bound on the mean-square error, valid also for nonreversible Markov chains on a finite state space.
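The MPA construction can be sketched in a few lines. As a simplifying assumption, i.i.d. draws stand in for the Markov chain samples, and $k$, $n$, $\mu_1$, $\mu_2$ are arbitrary illustrative values: each of $k$ independent runs produces one product of sample averages, and the estimator is the median of those $k$ products.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target theta = mu1 * mu2 with mu1 = 2, mu2 = 3, so theta = 6.
mu1, mu2, theta = 2.0, 3.0, 6.0
k, n = 15, 5_000             # k independent runs, n draws per average

products = []
for _ in range(k):
    a = rng.normal(mu1, 1.0, n).mean()   # average estimating mu1
    b = rng.normal(mu2, 1.0, n).mean()   # average estimating mu2
    products.append(a * b)               # one 'product of averages'

theta_hat = float(np.median(products))  # the MPA estimator
print(theta_hat)
```

The median step is what boosts the confidence level: if each product lands within $\theta\varepsilon$ of $\theta$ with probability a bit above $1/2$, the probability that the median does so approaches 1 exponentially fast in $k$.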