2008 Winter Simulation Conference
DOI: 10.1109/wsc.2008.4736082

The knowledge-gradient stopping rule for ranking and selection

Abstract: We consider the ranking and selection of normal means in a fully sequential Bayesian context. By considering the sampling and stopping problems jointly rather than separately, we derive a new composite stopping/sampling rule. The sampling component of the derived composite rule is the same as the previously introduced LL1 sampling rule, but the stopping rule is new. This new stopping rule significantly improves the performance of LL1 as compared to its performance under the best other generally known adaptive …
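
To illustrate the kind of procedure the abstract describes, below is a minimal sketch of a knowledge-gradient-style ranking-and-selection loop with a value-versus-cost stopping criterion. It assumes independent normal beliefs with a known, common sampling variance; the function names, parameters, and the specific stopping threshold (stop when the largest knowledge-gradient factor falls below the per-sample cost) are illustrative assumptions, not the paper's exact composite rule.

```python
# Sketch only: KG-style sampling with a value-vs-cost stopping criterion,
# under independent normal beliefs and known sampling variance (assumed setup).
import numpy as np
from scipy.stats import norm

def kg_factors(mu, sigma2, lam):
    """Knowledge-gradient factor for each alternative, given posterior means
    mu, posterior variances sigma2, and known sampling variance lam."""
    kg = np.zeros(len(mu))
    for i in range(len(mu)):
        # Std. dev. of the predictive change in mu[i] after one more sample of i
        sigma_tilde = sigma2[i] / np.sqrt(sigma2[i] + lam)
        if sigma_tilde == 0.0:
            continue  # belief already exact; another sample adds no value
        z = -abs(mu[i] - np.max(np.delete(mu, i))) / sigma_tilde
        kg[i] = sigma_tilde * (z * norm.cdf(z) + norm.pdf(z))
    return kg

def kg_ranking_and_selection(mu, sigma2, lam, cost, sample, budget):
    """Sample adaptively with the KG policy; stop when the largest KG factor
    (expected value of one more sample) drops below the per-sample cost,
    or when the budget is exhausted. Returns the index of the apparent best."""
    mu, sigma2 = np.array(mu, float), np.array(sigma2, float)
    for _ in range(budget):
        kg = kg_factors(mu, sigma2, lam)
        i = int(np.argmax(kg))
        if kg[i] < cost:                  # value of information < sampling cost
            break
        y = sample(i)                     # noisy observation of alternative i
        prec = 1.0 / sigma2[i] + 1.0 / lam
        mu[i] = (mu[i] / sigma2[i] + y / lam) / prec   # conjugate normal update
        sigma2[i] = 1.0 / prec
    return int(np.argmax(mu))

# Usage example: 5 alternatives with unknown true means, noise variance 4.
rng = np.random.default_rng(0)
truth = rng.normal(size=5)
best = kg_ranking_and_selection(mu=np.zeros(5), sigma2=np.full(5, 25.0),
                                lam=4.0, cost=0.01,
                                sample=lambda i: truth[i] + 2.0 * rng.normal(),
                                budget=200)
```

The design choice mirrored here is the one the abstract emphasizes: the sampling decision (which alternative to measure next) and the stopping decision (whether measuring is still worthwhile) are evaluated jointly on each iteration, rather than fixing the sample size in advance.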

Cited by 19 publications (5 citation statements); references 16 publications.

Citation statements, ordered by relevance:
“…Usually there is a limited budget for evaluating the alternatives, and when the budget is exhausted, the agent has to choose the alternative that appears to be the best according to the obtained knowledge (Swisher, Jacobson, & Yücesan, 2003). Frazier and Powell (2008) present a version of this problem where the prior information units and the newly obtained information units about each alternative are sampled from a specific distribution with unknown mean and variance. The model provides a new heuristic sampling and stopping rule that relies on the distribution of the samples.…”
Section: Exploration-Exploitation Problems
mentioning
confidence: 99%
“…Project members Warren Powell and Ilya Ryzhov have explored this problem of information collection on a graph and have found that they can adapt an existing knowledge-gradient method [FrPo08] to this new problem. In computational testing they have found that, for networks where the paths are not too short, the knowledge gradient works consistently quite well relative to competing techniques [RP09].…”
Section: Data Sampling Strategies For Sensor Data
mentioning
confidence: 99%
“…The work [4] provides an empirical study of several stopping rules. The work [19] combines the search for the next design point, at which the high-fidelity model is evaluated, with the decision of how many iterations to perform, and solves the corresponding problem heuristically. The authors of [19] show that their stopping rule can stop too soon, but that the loss incurred by stopping too soon is low.…”
mentioning
confidence: 99%
“…The work [19] combines the search for the next design point, at which the high-fidelity model is evaluated, with the decision of how many iterations to perform, and solves the corresponding problem heuristically. The authors of [19] show that their stopping rule can stop too soon, but that the loss incurred by stopping too soon is low. While a major challenge in stopping rules for Bayesian optimization is to forecast how much is gained by another adaptation, we will rely on rates that describe the error and cost behavior of our low-fidelity models to trade off costs and benefits.…”
mentioning
confidence: 99%