2015 Winter Simulation Conference (WSC)
DOI: 10.1109/wsc.2015.7408524

Unbiased Monte Carlo for optimization and functions of expectations via multi-level randomization

Abstract: We present general principles for the design and analysis of unbiased Monte Carlo estimators for quantities such as α = g(E(X)), where E(X) denotes the expectation of a (possibly multidimensional) random variable X, and g(·) is a given deterministic function. Our estimators possess finite work-normalized variance under mild regularity conditions such as local twice differentiability of g(·) and suitable growth and finite-moment assumptions. We apply our estimator to various settings of interest, such as o…
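The construction the abstract alludes to can be made concrete with a small sketch (our own illustration, not code from the paper): draw a random truncation level N, average 2^(N+1) i.i.d. copies of X, compare g of the full average against the mean of g over the odd- and even-labelled halves, and reweight the difference by P(N = n). Taking g(x) = x² and X ~ Uniform(0, 1) as a stand-in example, the resulting draw has expectation g(E[X]) = 0.25 exactly, unlike the plug-in g(sample mean), which is biased at every fixed sample size.

```python
import random
import statistics

def unbiased_g_of_mean(g, sample_x, r=2 ** -1.5, rng=random):
    """One draw of a randomized multilevel estimator of g(E[X]).

    Illustration only: N is geometric with P(N = n) = (1 - r) * r**n.
    For smooth g, r in (1/4, 1/2) balances finite variance of the
    reweighted correction against finite expected work per draw.
    """
    n = 0
    while rng.random() < r:              # sample the random truncation level N
        n += 1
    p_n = (1 - r) * r ** n
    xs = [sample_x() for _ in range(2 ** (n + 1))]
    odd = statistics.mean(xs[0::2])      # average over "odd" labels
    even = statistics.mean(xs[1::2])     # average over "even" labels
    # telescoping term: E[delta_n] = E[g(Xbar_{2^{n+1}})] - E[g(Xbar_{2^n})]
    delta = g(statistics.mean(xs)) - 0.5 * (g(odd) + g(even))
    # independent base term g(X_1) plus importance-weighted correction
    return g(sample_x()) + delta / p_n

# usage: g(E[U]) = 0.5**2 = 0.25 for U ~ Uniform(0, 1)
rng = random.Random(0)
estimate = statistics.mean(
    unbiased_g_of_mean(lambda x: x * x, rng.random, rng=rng)
    for _ in range(20000)
)
```

The choice r = 2^(-3/2) is a conventional compromise: p_n must decay slowly enough that delta / p_n keeps finite variance, yet fast enough that the expected number of X samples per draw stays finite.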

Cited by 25 publications (58 citation statements). References 16 publications.
“…The procedure developed in [4] proceeds as follows. First, define for a given h(W), and n ≥ 0, the average over odd and even labels to be…”
Section: Let Us Define
confidence: 99%
“…Multilevel Monte-Carlo (MLMC) techniques originate from the literature on parametric integration for solving integral and differential equations [24]. Our approach is based on an MLMC variant put forth by Blanchet and Glynn [8] for estimating functionals of expectations. Among several applications, they propose [8, Section 5.2] an estimator for argmin_x E_{S∼P} f(x; S), where f(·; s) is convex for all s, assuming access to minimizers of empirical objectives of the form Σ_{i∈[N]} f(x; s_i).…”
Section: Related Work
confidence: 99%
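As a concrete, hypothetical instance of that argmin construction: with f(x; s) = |x − s|, the minimizer argmin_x E_{S∼P} |x − S| is the population median, and the empirical minimizer is the sample median. The sketch below (our own illustration, following the odd/even-label scheme quoted above; the function name and the choice r = 0.4 are ours) combines sample medians at a random level into one draw whose expectation is the population median itself, not the expectation of any fixed-size sample median:

```python
import random
import statistics

def unbiased_median(sample_s, r=0.4, rng=random):
    """One draw of a randomized-multilevel estimator of the population
    median = argmin_x E|x - S|.  Illustration only: the base level uses
    2 samples, level n uses 2^(n+2), and P(N = n) = (1 - r) * r**n."""
    n = 0
    while rng.random() < r:
        n += 1
    p_n = (1 - r) * r ** n
    ss = [sample_s() for _ in range(2 ** (n + 2))]
    odd = statistics.median(ss[0::2])    # empirical minimizer, odd labels
    even = statistics.median(ss[1::2])   # empirical minimizer, even labels
    delta = statistics.median(ss) - 0.5 * (odd + even)
    base = statistics.median(sample_s() for _ in range(2))  # independent base
    return base + delta / p_n

# usage: the population median of Exp(1) is ln 2 ≈ 0.693
rng = random.Random(1)
est = statistics.mean(
    unbiased_median(lambda: rng.expovariate(1.0), rng=rng)
    for _ in range(20000)
)
```

Because the sample median is asymptotically linear, its first-order fluctuations cancel exactly between the full sample and the average of the two half-sample medians, which is what makes the reweighted correction well behaved.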
“…Our estimator is an instance of the multilevel Monte Carlo technique for de-biasing estimator sequences [24] and more specifically the method of Blanchet and Glynn [8]. Our key observation is that this method is readily applicable to strongly-convex variants of SGD, or indeed any stochastic optimization method with the same (optimal) rate of convergence.…”
Section: Introduction
confidence: 99%
“…The difference is that with EVPI all of the underlying random variables X and Y are inner variables; none are outer variables leading to a conditional expectation. Such an MLMC estimator for the maximum of an unconditional expectation has been introduced by Blanchet and Glynn (2015). As discussed in the introduction, however, EVPI can be estimated with O(ε⁻²) complexity by using standard Monte Carlo methods already, so that the benefit is that one could use a randomisation technique by Rhee and Glynn (2015) to obtain an unbiased estimator, which might be marginal in the current setting.…”
Section: MLMC Estimator for EVPPI
confidence: 99%