2004
DOI: 10.1007/978-3-540-30549-1_28
A Learning-Based Algorithm Selection Meta-Reasoner for the Real-Time MPE Problem

Cited by 11 publications (12 citation statements)
References 11 publications
“…Specifically, features 15-17 are based on the tour length obtained by LK; features 18-20, 21-23, and 24-26 are based on the tour length of local minima, the tour quality improvement per search step, and the number of search steps to reach a local minimum, respectively; features 27-29 measure the Hamming distance between two local minima; and features 30-32 describe the probability of edges appearing in any local minimum encountered during probing. Our branch-and-cut probing features (33-43) are based on 2-second runs of Concorde. Specifically, features 33-35 measure the improvement of the lower bound per cut; feature 36 is the ratio of upper and lower bound at the end of the probing run; and features 37-43 analyze the final LP solution.…”
mentioning
confidence: 99%
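Two of the probing features quoted above are simple enough to illustrate directly. The following minimal Python sketch computes the Hamming distance between two local-minimum tours (the quantity behind features 27-29) and the upper/lower bound ratio (feature 36). It assumes tours are given as permutations of city indices; all names are illustrative and not taken from the cited paper's code.

def tour_edges(tour):
    """Undirected edge set of a tour given as a permutation of city indices."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def tour_hamming_distance(tour_a, tour_b):
    """Count edges of tour_a absent from tour_b (the distance behind features 27-29)."""
    return len(tour_edges(tour_a) - tour_edges(tour_b))

def bound_ratio(upper_bound, lower_bound):
    """Feature 36: upper bound divided by lower bound at the end of the probing run."""
    return upper_bound / lower_bound

# Two local minima of a 5-city instance that differ in two edges:
print(tour_hamming_distance([0, 1, 2, 3, 4], [0, 2, 1, 3, 4]))  # -> 2
print(bound_ratio(105.0, 100.0))                                # -> 1.05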
“…Langley (1983a), Epstein et al (2002), Nareyek (2001) learn weights for decision rules to guide the selector towards the best algorithms. Cook and Varnell (1997), Guerri and Milano (2004), Guo and Hsu (2004), Roberts and Howe (2006), Bhowmick, Eijkhout, Freund, Fuentes, and Keyes (2006), Gent et al (2010a) go one step further and learn decision trees. Guo and Hsu (2004) again note that the reason for choosing decision trees was not primarily the performance, but the understandability of the result. Pfahringer, Bensusan, and Giraud-Carrier (2000) show the set of learned rules in the paper to illustrate its compactness.…”
Section: Per-portfolio Models
mentioning
confidence: 99%
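Since several of the works quoted above learn decision trees over instance features, a hedged sketch may help: the snippet below trains a scikit-learn decision tree to pick an algorithm from two made-up instance features, then prints the learned rules, the readability that Guo and Hsu (2004) highlight. Feature names, labels, and data are placeholders, not drawn from any of the cited papers.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic training data: each row is an instance's feature vector; each
# label names the algorithm that was fastest on that instance (placeholders).
X = np.array([[0.10, 5.0], [0.20, 4.5], [0.90, 1.0],
              [0.80, 1.2], [0.50, 3.0], [0.85, 0.9]])
y = ["exact", "exact", "local_search", "local_search", "exact", "local_search"]

selector = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Select an algorithm for an unseen instance from its features alone.
print(selector.predict([[0.70, 1.1]])[0])

# The learned rules are human-readable, which Guo and Hsu (2004) cite as the
# main reason for preferring decision trees over better-performing models.
print(export_text(selector, feature_names=["feature_1", "feature_2"]))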
“…A Bayesian approach is proposed in (Guo, 2004) to construct an algorithm selection system which is applied to the Sorting and Most Probable Explanation (MPE) problems. From a set of training instances, their features and the run time of the best algorithm that solves each instance are utilized to build the Bayesian network.…”
Section: Machine Learning
mentioning
confidence: 99%
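As a rough illustration of the cited idea, the sketch below uses a naive Bayes classifier (the simplest Bayesian-network structure over instance features and a class node) to map features to the algorithm most likely to be fastest. The features, algorithm names, and data are invented for illustration; Guo (2004) learns a richer Bayesian network rather than this naive form.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Training set: per-instance features paired with the algorithm that solved
# the instance fastest (synthetic numbers; "clique_tree" and "sampling" are
# hypothetical portfolio members, not the actual algorithm names from Guo).
X = np.array([[10, 0.20], [12, 0.30], [50, 0.90],
              [55, 0.80], [11, 0.25], [60, 0.95]])
y = ["clique_tree", "clique_tree", "sampling",
     "sampling", "clique_tree", "sampling"]

model = GaussianNB().fit(X, y)

# Posterior over algorithms for a fresh instance, then the most probable choice.
new_instance = [[48, 0.85]]
print(dict(zip(model.classes_, model.predict_proba(new_instance)[0])))
print(model.predict(new_instance)[0])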