2019
DOI: 10.1214/18-aos1715
The Zig-Zag process and super-efficient sampling for Bayesian analysis of big data

Abstract: Standard MCMC methods can scale poorly to big data settings due to the need to evaluate the likelihood at each iteration. There have been a number of approximate MCMC algorithms that use sub-sampling ideas to reduce this computational burden, but with the drawback that these algorithms no longer target the true posterior distribution. We introduce a new family of Monte Carlo methods based upon a multi-dimensional version of the Zig-Zag process of Bierkens and Roberts (2017), a continuous time piecewise deterministic…

Cited by 194 publications (308 citation statements)
References 41 publications
“…A number of recently developed methods require only unbiased estimators of (9). These include the stochastic gradient Langevin dynamics (SGLD) [49], the zig-zag sampler [4], or the bouncy particle sampler [6,39]. We propose to construct an unbiased estimator of (9) using the debiasing technique from the recent works [32,42], and then use this unbiased estimator within such MCMC methods.…”
Section: Mathematical Setup and Summary of Results
confidence: 99%
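The snippet above refers to unbiased estimators of a gradient quantity (equation (9) of the citing work, not reproduced here). As a minimal illustration of the subsampling idea it builds on — assuming, hypothetically, that the target gradient is a sum of n per-datum terms — a uniform with-replacement subsample of size m rescaled by n/m is unbiased for the full sum:

```python
import random

def subsampled_gradient(grad_terms, m, rng=random):
    """Unbiased estimator of sum(grad_terms) from a uniform
    with-replacement subsample of size m. Hypothetical helper:
    grad_terms[i] stands in for the i-th per-datum gradient term."""
    n = len(grad_terms)
    idx = [rng.randrange(n) for _ in range(m)]
    # E[(n/m) * sum_{i in S} g_i] = sum_i g_i, so the estimator is unbiased
    return (n / m) * sum(grad_terms[i] for i in idx)
```

The estimator touches only m of the n data points per evaluation, which is the source of the computational savings the quoted methods exploit; its variance grows as the subsample shrinks.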
“…Debiasing. Some MCMC methods, such as SGLD [49], as well as some recently developed piecewise-deterministic Markov processes (PDMP) [12,4,6,39], require only an unbiased estimator of the gradient of the log-likelihood for implementation. Others, such as pseudo-marginal MCMC, require an unbiased and non-negative estimator of the likelihood itself.…”
Section: 1
confidence: 99%
“…-On a single CPU, in our view the most promising approach with big n and tiny p, when a gradient is available, involves methods based on creating continuous-time stochastic processes without discretization that sample correctly from the posterior of interest-these include ScaLE (Pollock et al, 2016), ZigZag sampling (Bierkens, Fearnhead and Roberts, 2016), and the Bouncy Particle Sampler (BPS, Bouchard-Côté, Vollmer and Doucet, 2015). The key advantage of these methods is the ability to run while evaluating one data point at a time.…”
Section: Asynchronous
confidence: 99%
“…The zigzag process can be defined in multiple dimensions [5] but here we consider only one spatial dimension. In this setting the zigzag process is a Markov process (X(t), Θ(t)) in the state space E := R×{−1, +1}.…”
Section: Introduction
confidence: 99%
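The one-dimensional definition quoted above admits an exact, discretization-free simulation whenever the switching rate can be integrated in closed form. As a sketch (not the cited paper's implementation): for a standard Gaussian target, U(x) = x²/2, the canonical switching rate is λ(x, θ) = max(0, θx), and along the ray x + θs the first event time solves the integrated-rate equation, giving T = −a + sqrt(max(a, 0)² + 2E) with a = θx and E ~ Exp(1):

```python
import math
import random

def zigzag_gaussian(t_end, x0=0.0, theta0=1.0, seed=0):
    """Simulate the 1D zig-zag process (X(t), Theta(t)) on
    E = R x {-1, +1} targeting a standard Gaussian (U(x) = x^2/2).
    Returns the event skeleton [(t, x, theta), ...]; the trajectory
    is linear between events, and the last event may overshoot t_end."""
    rng = random.Random(seed)
    t, x, theta = 0.0, x0, theta0
    skeleton = [(t, x, theta)]
    while t < t_end:
        a = theta * x
        e = rng.expovariate(1.0)
        # invert Lambda(T) = integral_0^T max(0, a + s) ds = e exactly
        tau = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        t += tau
        x += theta * tau          # deterministic linear motion
        theta = -theta            # switch velocity at the event
        skeleton.append((t, x, theta))
    return skeleton
```

Because the integrated rate is inverted exactly, no Poisson thinning or time discretization is needed here; for general targets one would instead thin against a computable upper bound on the rate.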