2018
DOI: 10.48550/arxiv.1811.05073
Preprint

Regularised Zero-Variance Control Variates for High-Dimensional Variance Reduction

Abstract: Zero-variance control variates (ZV-CV) are a post-processing method to reduce the variance of Monte Carlo estimators of expectations using the derivatives of the log target. Once the derivatives are available, the only additional computational effort is solving a linear regression problem. Significant variance reductions have been achieved with this method in low-dimensional examples, but the number of covariates in the regression rapidly increases with the dimension of the target. We propose to exploit penali…
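
The regression step described in the abstract is simple enough to sketch. Below is a minimal, illustrative first-order ZV-CV example in Python on a toy Gaussian target; the target, integrand and sample size are assumptions for illustration, not drawn from the paper. Each component of the score function has zero expectation under the target, so it serves as a control variate, and the fitted intercept of an ordinary least-squares regression is the variance-reduced estimate.

    # Minimal illustrative sketch of first-order ZV-CV (not the authors' code).
    # Toy target: N(mu, sigma^2 I) in d dimensions; integrand f(x) = sum_j x_j^2.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n, mu, sigma = 5, 1000, 1.0, 2.0

    x = rng.normal(mu, sigma, size=(n, d))   # Monte Carlo samples
    u = -(x - mu) / sigma**2                 # score: derivatives of the log target
    f = (x**2).sum(axis=1)                   # integrand evaluations

    # Each score component has zero expectation under the target, so it is a
    # valid control variate; the OLS intercept is the variance-reduced estimate.
    X = np.column_stack([np.ones(n), u])
    beta, *_ = np.linalg.lstsq(X, f, rcond=None)

    print("vanilla MC estimate:", f.mean())
    print("first-order ZV-CV  :", beta[0])
    print("true value         :", d * (mu**2 + sigma**2))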

Cited by 10 publications (31 citation statements). References 48 publications (101 reference statements).

Citation statements (ordered by relevance):

Post-Processing of MCMC (South, Riabiz, Teymur et al., 2021; preprint, self-citation)

Section: Discussion (confidence: 99%):
“…This section focused on the application of gradient-based control variates to approximate an integral of interest based on output from MCMC. Applications to other sampling algorithms, such as population MCMC (Oates et al. 2016), stochastic gradient Langevin dynamics (Baker et al. 2019), sequential Monte Carlo (South et al. 2018) and unbiased MCMC with couplings (South et al. 2019), have also been considered, and much of our discussion applies unchanged. A current weakness of control variate methodology is that it is under-developed from a theoretical perspective; our focus was on sets of control variates that form linear subspaces of L²(P), for which some limited theoretical understanding has been achieved, but more sophisticated sets of control variates have also been empirically considered.…”

Section: Gradient-based Control Variates (confidence: 99%):
“…This limits the variance reduction that can be achieved. To improve convergence rates, one could consider increasing the size of Φ with increasing M, in the spirit of Portier & Segers (2018) and South et al. (2018), or using an infinite-dimensional basis with regularisation, as described next.…”
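
This growing-basis setting is exactly what the paper under review targets: as the polynomial order or the target dimension grows, the design matrix Φ gains covariates quickly, and penalised regression keeps the fit stable. The sketch below extends the earlier example to a second-order polynomial basis with a lasso penalty; the toy Gaussian target, the integrand and the scikit-learn LassoCV choice are illustrative assumptions, not the authors' exact setup.

    # Illustrative sketch of regularised ZV-CV with a second-order polynomial
    # basis (not the authors' code; target and penalty choice are assumptions).
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    d, n, mu, sigma = 20, 500, 1.0, 2.0

    x = rng.normal(mu, sigma, size=(n, d))
    u = -(x - mu) / sigma**2                 # score: derivatives of the log target
    f = (x**2).sum(axis=1)

    # Apply the Langevin Stein operator LQ = lap(Q) + grad(Q).u to monomials Q;
    # each resulting feature has zero expectation under the target.
    feats = [u]                              # Q = x_j      -> u_j
    feats.append(2.0 + 2.0 * x * u)          # Q = x_j^2    -> 2 + 2 x_j u_j
    for j in range(d):                       # Q = x_j x_k  -> x_k u_j + x_j u_k
        for k in range(j + 1, d):
            feats.append((x[:, k] * u[:, j] + x[:, j] * u[:, k])[:, None])
    Phi = np.hstack(feats)                   # n x (2d + d(d-1)/2) design matrix

    # The lasso penalty keeps the regression stable when the number of
    # covariates is comparable to the number of samples.
    model = LassoCV(cv=5).fit(Phi, f)
    zvcv_estimate = np.mean(f - Phi @ model.coef_)

    print("vanilla MC estimate:", f.mean())
    print("penalised ZV-CV    :", zvcv_estimate)
    print("true value         :", d * (mu**2 + sigma**2))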

Section: Introduction (confidence: 99%):
“…The development of non-parametric approaches to the choice of g_m has to date focused on kernel methods (Oates et al., 2017; Barp et al., 2018), piecewise constant approximations (Mijatović et al., 2018) and non-linear approximations based on selecting basis functions from a dictionary (Belomestny et al., 2017; South et al., 2019). Non-parametric approaches can be motivated using the “double descent” phenomenon recently exposed in Belkin et al. (2019), where a regression model based on a large number of features which are sufficiently regularised can, in some circumstances, achieve better predictive performance than regression models with a smaller number of predictors selected based on a bias-variance trade-off.…”
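
The kernel methods cited above (control functionals) replace the finite polynomial basis with a reproducing-kernel Hilbert space built from a Stein kernel, whose member functions all have zero mean under the target. A minimal sketch in the spirit of Oates et al. (2017) follows; the Gaussian base kernel, bandwidth, ridge penalty and toy target are assumptions for illustration rather than a definitive implementation.

    # Illustrative sketch of a kernel control functional with a Stein kernel
    # built from a Gaussian base kernel (bandwidth and penalty are assumptions).
    import numpy as np

    rng = np.random.default_rng(2)
    d, n, mu, sigma = 2, 200, 1.0, 2.0
    ell, lam = 1.0, 1e-3                     # kernel bandwidth, ridge penalty

    x = rng.normal(mu, sigma, size=(n, d))
    s = -(x - mu) / sigma**2                 # score: derivatives of the log target
    f = (x**2).sum(axis=1)

    # Stein kernel from the Gaussian kernel k(x, y) = exp(-|x-y|^2 / (2 ell^2)):
    # k0 = div_x div_y k + s(x).grad_y k + s(y).grad_x k + s(x).s(y) k.
    diff = x[:, None, :] - x[None, :, :]     # pairwise differences x_i - x_j
    r2 = (diff**2).sum(-1)
    K = np.exp(-r2 / (2 * ell**2))
    sdots = s @ s.T                          # s(x_i) . s(x_j)
    sx_r = (s[:, None, :] * diff).sum(-1)    # s(x_i) . (x_i - x_j)
    sy_r = (s[None, :, :] * diff).sum(-1)    # s(x_j) . (x_i - x_j)
    K0 = (d / ell**2 - r2 / ell**4 + (sx_r - sy_r) / ell**2 + sdots) * K

    # Closed-form ridge-regularised estimate of E[f]: the constant offset left
    # after projecting f onto the zero-mean RKHS.
    w = np.linalg.solve(K0 + lam * n * np.eye(n), np.ones(n))
    cf_estimate = (w @ f) / (w @ np.ones(n))

    print("vanilla MC estimate:", f.mean())
    print("control functional :", cf_estimate)
    print("true value         :", d * (mu**2 + sigma**2))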