2012
DOI: 10.1137/090773143

Fixed-Width Sequential Stopping Rules for a Class of Stochastic Programs

Abstract: Monte Carlo sampling-based methods are frequently used in stochastic programming when exact solution is not possible. A critical component of Monte Carlo sampling-based methods is determining when to stop sampling to ensure the desired quality of the solutions. In this paper, we develop stopping rules for sequential sampling procedures that depend on the width of an optimality gap confidence interval estimator. The procedures solve a sequence of sampling approximations with increasing sample size to generate s…
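The procedure the abstract describes can be sketched on a toy problem. The following is a minimal illustration only, not code from the paper: the problem min_x E[(x - xi)^2] with xi ~ N(0, 1), the 1.5x sample-size growth factor, and the helper names (`saa_solve`, `gap_ci_width`, `sequential_saa`) are all assumptions made for the sketch. The stopping rule is the generic one from the abstract: stop when the optimality-gap confidence interval width falls below a tolerance.

```python
import math
import random
import statistics

# Toy problem: min_x E[(x - xi)^2] with xi ~ N(0, 1).
# The SAA minimizer is the sample mean; the true optimum is x* = 0.

def saa_solve(sample):
    """Solve the sampling approximation; for this toy objective it is the mean."""
    return statistics.fmean(sample)

def gap_ci_width(x_cand, sample, z=1.96):
    """Upper confidence bound on the optimality gap of x_cand.

    Point estimate: f_n(x_cand) - min_x f_n(x) on an independent sample,
    plus a normal-approximation sampling-error term z * s / sqrt(n).
    """
    costs = [(x_cand - xi) ** 2 for xi in sample]
    f_cand = statistics.fmean(costs)
    x_min = saa_solve(sample)
    f_min = statistics.fmean([(x_min - xi) ** 2 for xi in sample])
    se = statistics.stdev(costs) / math.sqrt(len(sample))
    return (f_cand - f_min) + z * se

def sequential_saa(eps=0.05, n0=100, growth=1.5, max_iter=50, seed=0):
    """Grow the sample size until the gap CI width drops below eps."""
    rng = random.Random(seed)
    n = n0
    for _ in range(max_iter):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        x_cand = saa_solve(sample)
        fresh = [rng.gauss(0.0, 1.0) for _ in range(n)]  # independent gap sample
        width = gap_ci_width(x_cand, fresh)
        if width <= eps:
            return x_cand, n, width
        n = int(n * growth)
    return x_cand, n, width
```

The gap sample is drawn independently of the sample used to find the candidate, so the gap estimate is not optimistically biased by reusing the optimization sample; the paper's actual estimators and stopping conditions are more refined than this normal-approximation sketch.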

Cited by 20 publications (27 citation statements)
References 45 publications
“…We note, however, that MCMC-IS can be paired with many other stochastic optimization algorithms, such as the sample average approximation method, stochastic decomposition (Higle and Sen (1991)), progressive hedging (Rockafellar and Wets (1991)), augmented Lagrangian methods (Parpas and Rustem (2007)), variants of Benders' decomposition (Birge and Louveaux (2011)), or even approximate dynamic programming (Powell (2007)). More generally, we also expect MCMC-IS to yield similar benefits in sampling-based approaches for developing stopping rules (Bayraksan and Pierre-Louis (2012), Morton (1998)), chance-constrained programming (Barrera et al (2014), Watson et al (2010)), and risk-averse stochastic programming (Kozmík and Morton (2013), Shapiro (2009)). …”
Section: Introduction
Mentioning confidence: 93%
“…For example, the conditions that allow the transfer of structural properties from the sample-path to the limit function f(x) [33, Propositions 1, 3, 4]; the sufficient conditions for the consistency of the optimal value and solution of Problem P_n assuming the numerical procedure in use within SAA can produce global optima [53, Theorem 5.3]; consistency of the set of stationary points of Problem P_n [53, 6]; convergence rates for the optimal value [53, Theorem 5.7] and optimal solution [33, Theorem 12]; expressions for the minimum sample size m that provides probabilistic guarantees on the optimality gap of the sample-path solution [52, Theorem 5.18]; methods for estimating the accuracy of an obtained solution [37, 8, 9]; and quantifications of the trade-off between searching and sampling [51], have all been thoroughly studied. SAA is usually not implemented in the vanilla form P_n due to known issues relating to an appropriate choice of the sample size n. There have been recent advances [44, 24, 8, 9, 10] aimed at defeating the issue of sample size choice.…”
Section: Useful Results
Mentioning confidence: 99%
“…A key practical question in any simulation paradigm is, when should sampling stop? Sequential stopping rules that check whether a desired criterion has been satisfied have been useful in answering this question in steady-state simulations (Dong and Glynn, 2019; Glynn and Whitt, 1992), stochastic programming (Bayraksan and Pierre-Louis, 2012), general Monte Carlo (Frey, 2010; Vats et al, 2021), and MCMC (Flegal and Gong, 2015; Vats et al, 2019).…”
Section: Using ESS To Stop Simulation
Mentioning confidence: 99%
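For the general Monte Carlo case mentioned in the quote above, a fixed-width sequential stopping rule for estimating a mean can be sketched as follows. This is a minimal illustration, not code from any of the cited papers: the function name, batch size, and minimum-sample guard are assumptions, the half-width uses a plain normal approximation, and the sketch ignores the slight undercoverage that sequential stopping is known to introduce.

```python
import math
import random
import statistics

def fixed_width_mc(sample_fn, eps=0.02, batch=500, n_min=1000, z=1.96, max_n=10**6):
    """Sample in batches; stop once the CI half-width z * s / sqrt(n) <= eps."""
    draws = []
    half = float("inf")
    while len(draws) < max_n:
        draws.extend(sample_fn() for _ in range(batch))
        n = len(draws)
        if n < n_min:
            continue  # guard against stopping early on a noisy variance estimate
        half = z * statistics.stdev(draws) / math.sqrt(n)
        if half <= eps:
            break
    return statistics.fmean(draws), half, len(draws)

# Estimate E[xi^2] for xi ~ N(0, 1); the true value is 1.
rng = random.Random(1)
mean, half, n = fixed_width_mc(lambda: rng.gauss(0.0, 1.0) ** 2)
```

The `n_min` guard plays the same role as the "do not stop too soon" safeguards discussed in the sequential-stopping literature: with too few draws, the variance estimate is unreliable and the rule could terminate prematurely.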