Determining whether a solution is of high quality (optimal or near optimal) is a fundamental question in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap, and our procedures output a confidence interval on this gap. We review a multiple-replications procedure that requires solving, say, 30 optimization problems, and we then present a result that justifies a computationally simplified single-replication procedure requiring the solution of only one optimization problem. Although the single-replication procedure is significantly less demanding computationally, the resulting confidence interval can have low coverage probability at small sample sizes for some problems. We therefore provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we provide preliminary guidelines for selecting a candidate solution.
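As a concrete illustration of the multiple-replications idea (a minimal sketch, not the authors' implementation), the procedure below estimates a one-sided confidence bound on the optimality gap for a toy newsvendor instance with uniform demand. The helper names (`cost`, `solve_saa`, `mrp_gap_ci`), the cost parameters, and the demand distribution are all assumptions made for this example:

```python
import math
import random
import statistics

def cost(x, d, c=5.0, p=9.0):
    # Newsvendor cost: order x units at unit cost c, sell min(x, d) at price p.
    return c * x - p * min(x, d)

def avg_cost(x, sample):
    # Sample-average objective over the drawn demand scenarios.
    return sum(cost(x, d) for d in sample) / len(sample)

def solve_saa(sample, c=5.0, p=9.0):
    # The sample-average newsvendor problem is minimized at the
    # critical-ratio quantile (p - c)/p of the empirical demand distribution.
    q = sorted(sample)
    k = math.ceil((p - c) / p * len(q)) - 1
    return q[max(k, 0)]

def mrp_gap_ci(x_hat, n=500, replications=30, seed=1):
    # Multiple-replications procedure: each replication draws a fresh sample,
    # solves the sampled problem, and records one optimality-gap estimate.
    rng = random.Random(seed)
    gaps = []
    for _ in range(replications):
        sample = [rng.uniform(0.0, 100.0) for _ in range(n)]  # demand ~ U(0, 100)
        x_star_n = solve_saa(sample)
        # Nonnegative by construction, since x_star_n minimizes avg_cost.
        gaps.append(avg_cost(x_hat, sample) - avg_cost(x_star_n, sample))
    g_bar = statistics.mean(gaps)
    s = statistics.stdev(gaps)
    t = 1.699  # approximate t-quantile t_{0.95, 29} for 30 replications
    return g_bar + t * s / math.sqrt(replications)  # one-sided upper CI limit
```

For this instance the true optimal order quantity is the (p - c)/p = 4/9 quantile of U(0, 100), roughly 44.4, so a candidate near that value yields a small gap bound while a poor candidate (say 10.0) yields a much larger one.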
We develop a sequential sampling procedure for a class of stochastic programs. We assume that a sequence of feasible solutions with an optimal limit point is given as input to our procedure. Such a sequence can be generated by solving a series of sampling problems with increasing sample size, or it can be found by any other viable method. Our procedure estimates the optimality gap of a candidate solution from this sequence. If the point estimate of the optimality gap is sufficiently small according to our termination criterion, then we stop. Otherwise, we repeat with the next candidate solution from the sequence under an increased sample size. We provide conditions under which this procedure (i) terminates with probability one and (ii) terminates with a solution that has a small optimality gap with a prespecified probability.
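The sequential loop described above can be sketched on a toy quadratic stochastic program. The candidate sequence, the sample-size growth factor, and the simple termination rule below are illustrative assumptions, not the paper's exact conditions:

```python
import random

def estimate_gap(x_hat, n, rng):
    # One-replication gap estimate for the toy problem
    # minimize E[(x - Xi)^2] with Xi ~ N(0, 1); the true optimum is x* = 0.
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    x_star_n = sum(sample) / n  # the sampled problem's exact minimizer (sample mean)
    obj = lambda x: sum((x - xi) ** 2 for xi in sample) / n
    return obj(x_hat) - obj(x_star_n)

def sequential_sampling(candidates, n0=100, growth=1.5, epsilon=0.01, seed=7):
    # Walk down the candidate sequence, increasing the sample size each
    # iteration, until the estimated optimality gap falls below epsilon.
    rng = random.Random(seed)
    n = n0
    for x_hat in candidates:
        gap = estimate_gap(x_hat, n, rng)
        if gap <= epsilon:
            return x_hat, gap, n
        n = int(n * growth)
    raise RuntimeError("candidate sequence exhausted before termination")

# A candidate sequence converging to the optimum x* = 0.
candidates = [1.0 / (k + 1) for k in range(50)]
```

Because the candidates converge to the optimum and the sample size grows geometrically, the gap estimates shrink and the loop terminates after a few iterations on this example.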
Determining whether a solution is optimal or near optimal is fundamental in optimization theory, algorithms, and computation. For instance, Karush-Kuhn-Tucker conditions provide necessary and sufficient optimality conditions for certain classes of problems, and bounds on optimality gaps are frequently used as part of optimization algorithms. Such bounds are obtained through Lagrangian, integrality, or semidefinite programming relaxations. An alternative approach in stochastic programming is to use Monte Carlo sampling-based estimators of the optimality gap. In this tutorial, we present a simple, easily implemented procedure that forms a point and interval estimator of the optimality gap of a given candidate solution. We then discuss methods to reduce the computational effort, bias, and variance of our simplest estimator. We also provide a framework that allows the use of these optimality-gap estimators in an algorithmic way, with rules for iteratively increasing the sample sizes and for terminating. This scheme can be used as a stand-alone sequential sampling procedure, or it can be used in conjunction with a variety of sampling-based algorithms to obtain a solution to a stochastic program with a priori control on the quality of that solution.
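A minimal sketch of a single-replication point and interval estimator of the optimality gap, on the same toy quadratic problem used for illustration (the function name, sample size, and normal quantile are assumptions for the example, not the tutorial's exact recipe):

```python
import math
import random
import statistics

def srp_gap_ci(x_hat, n=2000, seed=11):
    # Single-replication procedure on the toy problem
    # minimize E[(x - Xi)^2], Xi ~ N(0, 1); the sampled minimizer is the sample mean.
    rng = random.Random(seed)
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    x_star_n = sum(sample) / n
    # Per-scenario gap terms share one sample, so only one problem is solved;
    # their sample variance drives the interval half-width.
    terms = [(x_hat - xi) ** 2 - (x_star_n - xi) ** 2 for xi in sample]
    g_bar = statistics.mean(terms)
    s = statistics.stdev(terms)
    z = 1.645  # standard normal 0.95 quantile, one-sided interval
    return g_bar, g_bar + z * s / math.sqrt(n)  # (point estimate, upper CI limit)
```

A candidate far from the optimum produces a visibly larger point estimate than one at the optimum, which is the kind of signal the bias- and variance-reduction refinements discussed above aim to sharpen.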