2019
DOI: 10.48550/arxiv.1905.03673
Preprint

Stein Point Markov Chain Monte Carlo

Cited by 5 publications (33 citation statements) · References 0 publications
“…Several MCMC thinning procedures have been developed based upon RKHS embedding: offline methods such as Stein Thinning and MMD thinning (Riabiz et al, 2020; Teymur et al, 2020) take a full chain S of MCMC samples as input and iteratively build a subset D by greedy KSD/MMD minimization. The online method Stein Point MCMC (SPMCMC) (Chen et al, 2019a) selects the optimal sample from a batch of m samples during MCMC sampling: at each step it adds the best of the m points to D. Doing so mitigates both the aforementioned redundancy and representational complexity issues; however, Chen et al (2019a) only append new points to the existing empirical measure estimate, which may still retain too many redundant points. A similar line of research in Gaussian Processes (Williams & Rasmussen, 2006) and kernel regression (Hofmann et al, 2008) reduces the complexity of a nonparametric distributional representation through offline point selection rules such as Nyström sampling (Williams & Seeger, 2001), greedy forward selection (Seeger et al, 2003; Wang et al, 2012), or inducing inputs (Snelson & Ghahramani, 2005).…”
Section: Related Work · mentioning · confidence: 99%
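To make the best-of-m selection concrete, here is a minimal sketch of the SPMCMC step under stated assumptions: an IMQ base kernel k(x, y) = (c² + ‖x − y‖²)^β with β = −1/2, a standard Gaussian target (so the score is −x), and random-walk candidates. The names `stein_kernel` and `spmcmc_select` are illustrative, not from the cited papers.

```python
import numpy as np

def grad_log_p(x):
    # Score of a standard Gaussian target; an assumption for this sketch.
    return -x

def stein_kernel(x, y, c2=1.0, beta=-0.5):
    """Langevin Stein kernel k_p built from the IMQ base kernel
    k(x, y) = (c^2 + ||x - y||^2)^beta."""
    r = x - y
    rr = float(r @ r)
    s = c2 + rr
    d = x.size
    gx, gy = grad_log_p(x), grad_log_p(y)
    # trace of the mixed Hessian: div_x div_y k
    trace = (-4.0 * beta * (beta - 1.0) * s ** (beta - 2.0) * rr
             - 2.0 * beta * d * s ** (beta - 1.0))
    # grad_x k . g(y) + grad_y k . g(x)
    cross = 2.0 * beta * s ** (beta - 1.0) * float(r @ (gy - gx))
    return trace + cross + (s ** beta) * float(gx @ gy)

def spmcmc_select(D, candidates, c2=1.0):
    """Best-of-m step: pick the candidate y minimizing the squared KSD of
    D + {y}, i.e. argmin_y  k_p(y, y)/2 + sum_{x in D} k_p(x, y)."""
    def objective(y):
        return 0.5 * stein_kernel(y, y, c2) + sum(stein_kernel(x, y, c2) for x in D)
    return min(candidates, key=objective)

# Toy run: m = 5 random-walk candidates per step around the last selected point.
rng = np.random.default_rng(0)
D = [rng.normal(size=2)]
for _ in range(20):
    cands = [D[-1] + rng.normal(scale=0.8, size=2) for _ in range(5)]
    D.append(spmcmc_select(D, cands))
print(f"selected {len(D)} points")
```

The key point the excerpt makes is visible here: `spmcmc_select` only ever appends, so redundant points, once added, are never revisited.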
“…Most similar to our work is a variant of Stein Point MCMC proposed in (Chen et al, 2019a, Appendix A.6.5), which develops a non-adaptive add/drop criterion where a fixed number of points is dropped at each stage and the dictionary size grows linearly with the time step. By contrast, our work develops a flexible and adaptive scheme that automatically determines the number of points to drop, and the dictionary size grows sub-linearly with the time step.…”
Section: Related Work · mentioning · confidence: 99%
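The non-adaptive drop criterion the excerpt contrasts against can be sketched in the same spirit. Below is a hypothetical greedy backward-deletion step, not the exact rule from (Chen et al, 2019a, Appendix A.6.5): it removes a fixed number `k_drop` of points per stage, each time deleting the point whose removal most reduces the squared KSD, reusing the `stein_kernel` sketch above. An adaptive scheme would instead determine `k_drop` per stage, e.g. by a KSD budget.

```python
import numpy as np  # reuses stein_kernel from the sketch above

def drop_points(D, k_drop, c2=1.0):
    """Non-adaptive drop step: greedily delete k_drop points, each time
    removing the one whose removal most reduces the squared KSD
    (1/m^2) * sum_{i,j} k_p(x_i, x_j) of the retained set."""
    K = np.array([[stein_kernel(x, y, c2) for y in D] for x in D])
    keep = list(range(len(D)))
    for _ in range(k_drop):
        total = K[np.ix_(keep, keep)].sum()
        def ksd2_without(j):
            m = len(keep) - 1
            # drop row/column j from the retained Gram sum (K is symmetric)
            return (total - 2.0 * K[j, keep].sum() + K[j, j]) / m ** 2
        keep.remove(min(keep, key=ksd2_without))
    return [D[i] for i in keep]

# Example: thin the dictionary from the previous sketch by 5 points.
D_small = drop_points(D, k_drop=5)
print(f"kept {len(D_small)} of {len(D)} points")
```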