2020
DOI: 10.1007/978-3-030-46147-8_32

Neural Control Variates for Monte Carlo Variance Reduction

Abstract: In statistics and machine learning, approximation of an intractable integration is often achieved by using the unbiased Monte Carlo estimator, but the variances of the estimation are generally high in many applications. Control variates approaches are well-known to reduce the variance of the estimation. These control variates are typically constructed by employing predefined parametric functions or polynomials, determined by using those samples drawn from the relevant distributions. Instead, we propose to cons…
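To make the construction in the abstract concrete, here is a minimal Python sketch of a classical control variate for a Monte Carlo mean estimate. The Gaussian target, the integrand f(x) = exp(x), and the control function g(x) = x with known mean zero are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: estimate E[f(X)] for X ~ N(0, 1) with f(x) = exp(x).
# The true value exp(1/2) is used only to report the error.
def f(x):
    return np.exp(x)

true_value = np.exp(0.5)
x = rng.standard_normal(10_000)

# Plain Monte Carlo estimate.
plain = f(x).mean()

# Control variate g(x) = x, whose mean under N(0, 1) is known to be zero.
g = x
# Plug-in estimate of the variance-minimising coefficient beta = Cov(f, g) / Var(g).
cov = np.cov(f(x), g)
beta = cov[0, 1] / cov[1, 1]
with_cv = (f(x) - beta * g).mean()

print(f"plain MC error:        {abs(plain - true_value):.4f}")
print(f"control-variate error: {abs(with_cv - true_value):.4f}")
```

Estimating the coefficient from the same samples introduces a small bias, usually negligible at this sample size, but it motivates sample splitting in more careful implementations.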

Cited by 9 publications (16 citation statements) | References 15 publications

“…A current weakness of control variate methodology is that it is under-developed from a theoretical perspective; our focus was on sets of control variates that form linear subspaces of L^2(P), for which some limited theoretical understanding has been achieved, but more sophisticated sets of control variates have also been empirically considered. For example, Wan et al (2019), Si et al (2020) proposed to use the gradients of a neural network for the set Φ. A neural network is parameterised by a collection of weights and biases, which are jointly estimated using stochastic gradient descent applied to a proxy for mean square error, as discussed in Section 3.1.3.…”
Section: Discussion (mentioning; confidence: 99%)

Post-Processing of MCMC. South, Riabiz, Teymur et al., 2021. Preprint.
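The neural control variate construction quoted above can be sketched as follows. The one-dimensional Gaussian target (score -x), the tiny multilayer perceptron playing the role of φ, and the empirical variance of the corrected samples as the training objective (standing in for the "proxy for mean square error") are illustrative assumptions, not the exact architecture or objective of Wan et al. (2019) or Si et al. (2020).

```python
import torch

torch.manual_seed(0)

# Illustrative target: X ~ N(0, 1), so the score is d/dx log p(x) = -x.
# Illustrative integrand: f(x) = x^2 + sin(x); its true mean under N(0, 1) is 1.
def f(x):
    return x ** 2 + torch.sin(x)

def score(x):
    return -x

# Small MLP phi_theta; the control variate is the Langevin-Stein operator
#   g(x) = phi''(x) + phi'(x) * score(x),
# which has mean zero under the target for sufficiently regular phi.
phi = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def stein_cv(x):
    x = x.requires_grad_(True)
    out = phi(x.unsqueeze(-1)).squeeze(-1)
    d1 = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    d2 = torch.autograd.grad(d1.sum(), x, create_graph=True)[0]
    return d2 + d1 * score(x)

# Fit the network by minimising the empirical variance of the corrected samples.
x_train = torch.randn(2000)
opt = torch.optim.Adam(phi.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = (f(x_train) - stein_cv(x_train)).var()
    loss.backward()
    opt.step()

# Compare the spread of raw and corrected samples on fresh draws.
x_test = torch.randn(50_000)
fx = f(x_test)
corrected = fx - stein_cv(x_test)
print(f"sample std, plain MC:       {fx.std().item():.3f}")
print(f"sample std, with neural CV: {corrected.std().item():.3f}")
print(f"estimate with neural CV:    {corrected.mean().item():.4f}   (true mean = 1.0)")
```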
“…However, the regression perspective in Equation (19) suggests that, by analogy with high-dimensional regression modelling (Bühlmann & Van De Geer 2011), it may be possible to construct control variates for functions f whose effective dimension is small, despite a high ambient dimension of X. Additional regularisation can be introduced to this effect (South et al 2018, Wan et al 2019), with positive results reported for d ≤ 100. For even larger d, it may be sensible to pursue nonlinear approximation (DeVore 1998), where the basis Φ is restricted to allow dependence only on a subset of the parameters (so-called a priori regularisation in South et al 2018).…”
Section: Bias-correcting (mentioning; confidence: 99%)

Post-Processing of MCMC. South, Riabiz, Teymur et al., 2021. Preprint.
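The high-dimensional regression view in the passage above can be illustrated with a short sketch. The Gaussian target, the first-order score basis (each component has known mean zero under the target), and the Lasso penalty are illustrative stand-ins for the regularised constructions cited (South et al 2018; Wan et al 2019), not their actual implementations.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

d, n = 200, 500                       # ambient dimension deliberately large
x = rng.standard_normal((n, d))       # samples from the target N(0, I_d)

# Integrand depends on only three coordinates (small effective dimension);
# its true mean under N(0, I_d) is 0 + 0.5 + 0 = 0.5.
f = x[:, 0] + 0.5 * x[:, 1] ** 2 + np.sin(x[:, 2])
true_mean = 0.5

# First-order score-based basis: phi_j(x) = d/dx_j log p(x) = -x_j,
# each with known mean zero under the target.
basis = -x

plain = f.mean()

# Lasso regression of f on the basis; the penalty keeps the coefficient
# estimates stable when the number of candidate control variates approaches n.
fit = Lasso(alpha=0.05).fit(basis, f)
with_cv = (f - basis @ fit.coef_).mean()

print(f"plain MC error: {abs(plain - true_mean):.4f}")
print(f"lasso CV error: {abs(with_cv - true_mean):.4f}")
```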
“…Practical parametric approaches to the choice of g_n have been well-studied in the Bayesian context, typically based on polynomial regression models (Assaraf & Caffarel, 1999; Mira et al, 2013; Papamarkou et al, 2014; Oates et al, 2016; Brosse et al, 2019), but neural networks have also been proposed recently (Wan et al, 2019; Si et al, 2020). In particular, existing control variates based on polynomial regression have the attractive property of being semi-exact, meaning that there is a well-characterized set of functions f ∈ F for which f_n can be shown to exactly equal f after a finite number of samples n have been obtained.…”
Section: Introduction (mentioning; confidence: 99%)
“…Practical parametric approaches to the choice of g_m have been well-studied in the Bayesian context, typically based on polynomial regression models (Assaraf and Caffarel, 1999; Mira et al, 2013; Papamarkou et al, 2014; Oates et al, 2016; Brosse et al, 2019), but neural networks have also been proposed (Wan et al, 2019). In particular, existing control variates based on polynomial regression have the attractive property of being semi-exact, meaning that there is a well-characterized set of functions f ∈ F for which f_m can be shown to exactly equal f after a finite number of data m have been obtained.…”
Section: Introduction (mentioning; confidence: 99%)
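The semi-exactness property described in the last two passages can be verified in a toy case: under a standard Gaussian target, a second-order zero-variance-style control variate recovers the mean of any quadratic integrand exactly from a handful of samples. The particular integrand and basis below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target N(0, 1), so score(x) = -x.  Second-order zero-variance basis: apply
# the Langevin-Stein operator  P -> P'' + P' * score  to the monomials x, x^2.
x = rng.standard_normal(10)           # only a handful of samples
cv1 = -x                              # from P(x) = x
cv2 = 2.0 - 2.0 * x ** 2              # from P(x) = x^2

# Quadratic integrand; its true mean under N(0, 1) is 3 + 2 * 1 = 5.
f = 3.0 + 4.0 * x + 2.0 * x ** 2

# Least-squares fit f ~ intercept + b1*cv1 + b2*cv2.  Because f lies in the
# span of {1, cv1, cv2}, the intercept recovers E[f] exactly: semi-exactness.
design = np.column_stack([np.ones_like(x), cv1, cv2])
coef, *_ = np.linalg.lstsq(design, f, rcond=None)

print(f"plain MC estimate (10 samples): {f.mean():.4f}")
print(f"control-variate intercept:      {coef[0]:.4f}   (true mean = 5.0)")
```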