2008
DOI: 10.1142/s0218213008004023

Random Subsets Support Learning a Mixture of Heuristics

Abstract: Problem solvers, both human and machine, have at their disposal many heuristics that may support effective search. The efficacy of these heuristics, however, varies with the problem class, and their mutual interactions may not be well understood. The long-term goal of our work is to learn how to select appropriately from among a large body of heuristics, and how to combine them into a mixture that works well on a specific class of problems. The principal result reported here is that randomly chosen subsets of …
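To make the abstract's core idea concrete, here is a minimal sketch of the random-subset step, assuming a pool of named low-level heuristics. The pool contents, function names, and sizes are illustrative, not taken from the paper.

```python
import random

# Hypothetical pool of low-level heuristics; the names are illustrative,
# not the actual pool used in the paper.
HEURISTIC_POOL = [
    "min-domain", "max-degree", "min-domain/degree",
    "max-forward-degree", "min-conflicts-value", "random-value",
]

def sample_subsets(pool, subset_size, n_subsets, seed=None):
    """Draw several random subsets of heuristics from the pool."""
    rng = random.Random(seed)
    return [rng.sample(pool, subset_size) for _ in range(n_subsets)]

# Each sampled subset would then be evaluated and weighted on a
# benchmark problem class.
for subset in sample_subsets(HEURISTIC_POOL, subset_size=3, n_subsets=4, seed=0):
    print(subset)
```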

Cited by 13 publications (7 citation statements); references 16 publications; citing publications span 2010–2019.

“…There are several examples in the literature of empirical research concerning subsets of low-level heuristics, such as Petrovic and Epstein (2008) and Soria-Alcaraz et al. (2017), as well as work from a purely theoretical perspective (Lehre and Özcan 2013). For example, in Petrovic and Epstein (2008) subsets of heuristics are randomly selected from a large pool of available heuristics. Each subset is evaluated on a number of benchmark problems, and a learning algorithm is used to determine weights for the elements of the subset.…”
Section: Heuristic Subsequences (mentioning; confidence: 99%)
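The evaluate-and-weight loop the quoted passage describes might look like the following toy sketch. The credit signal and the update rule here are illustrative assumptions, not the paper's actual learning algorithm.

```python
def learn_weights(subset, agreement_samples, lr=0.1):
    """Toy weight update: nudge each heuristic's weight toward how often
    its advice agreed with decisions that led to a solution. The update
    rule is illustrative, not the one used in the paper."""
    weights = {h: 1.0 for h in subset}
    for sample in agreement_samples:           # one entry per benchmark problem
        for h, credit in sample.items():       # credit in [0, 1] for heuristic h
            weights[h] += lr * (credit - 0.5)  # reward above-chance advice
    return weights

# Hypothetical agreement data for a 3-heuristic subset on two problems.
subset = ["min-domain", "max-degree", "min-domain/degree"]
samples = [
    {"min-domain": 0.9, "max-degree": 0.4, "min-domain/degree": 0.7},
    {"min-domain": 0.8, "max-degree": 0.3, "min-domain/degree": 0.6},
]
print(learn_weights(subset, samples))
```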
“…The most extreme case in this representation would be an algorithm built around an infinite switch-case statement that maps every problem to the best heuristic for it. Hyper-heuristics are closely related to dynamic algorithm portfolios [25,30] in the way they work. Both algorithm portfolios and hyper-heuristics select from existing methods to apply the most suitable one at different stages of the search.…”
Section: Hyper-heuristics (mentioning; confidence: 99%)
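A caricature of the selection step shared by portfolios and hyper-heuristics: score each available method against the current search state and dispatch to the best one. The Method type, the scoring lambdas, and the stall_ratio feature are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Method:
    name: str
    score: Callable[[dict], float]  # estimated suitability for a search state

# Illustrative portfolio: prefer an intensifying move early, a
# diversifying move once progress stalls. Scores are made up.
portfolio = [
    Method("hill-climb", lambda s: 1.0 - s["stall_ratio"]),
    Method("random-restart", lambda s: s["stall_ratio"]),
]

def select_method(state, portfolio):
    """Dispatch to the method with the highest suitability score,
    mimicking how a portfolio or hyper-heuristic switches methods
    at different stages of the search."""
    return max(portfolio, key=lambda m: m.score(state))

print(select_method({"stall_ratio": 0.8}, portfolio).name)  # -> random-restart
```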
“…More recent studies on the combination of heuristics for CSPs include the work done by Petrovic and Epstein [30], who studied the idea of combining various heuristics to produce mixtures that work well on some sets of CSP instances. Their approach bases its decisions on random sets of performance-weighted criteria that the variable- and value-ordering heuristics use to make their decisions.…”
Section: Hyper-heuristics (mentioning; confidence: 99%)
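One plausible reading of "performance-based weighted criteria" is a weighted vote over per-heuristic scores, sketched below. The heuristics, weights, and toy variables are assumptions for illustration, not the exact mechanism from [30].

```python
def weighted_vote(candidates, heuristics, weights):
    """Choose the variable with the highest weighted sum of heuristic
    scores; a sketch of mixing weighted ordering criteria, not the
    paper's exact decision rule."""
    def mixture_score(var):
        return sum(weights[name] * h(var) for name, h in heuristics.items())
    return max(candidates, key=mixture_score)

# Toy CSP variables described by (domain size, degree); both heuristics
# and their weights are made up.
candidates = {"x": (2, 3), "y": (4, 5), "z": (3, 1)}
heuristics = {
    "min-domain": lambda v: -candidates[v][0],  # prefer small domains
    "max-degree": lambda v: candidates[v][1],   # prefer many constraints
}
weights = {"min-domain": 0.7, "max-degree": 0.3}
print(weighted_vote(candidates, heuristics, weights))  # -> "x"
```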
“…Also, algorithm portfolios for constraint programming have been successfully studied before [13]. Petrovic and Epstein [25] studied the idea of combining various heuristics to produce mixtures that work well on particular classes of instances. More recent studies on the dynamic combination of heuristics applied to CSP include the work done by Terashima-Marín et al. [35], who proposed an evolutionary framework to generate hyper-heuristics for variable ordering in CSP, and the research developed by Bittle and Fox [5], who presented a hyper-heuristic approach for variable and value ordering in CSP based on a symbolic cognitive architecture augmented with case-based reasoning as the machine learning mechanism for their hyper-heuristics.…”
Section: Hyper-heuristics (mentioning; confidence: 99%)