2010
DOI: 10.1109/tevc.2009.2033579
Learning the Large-Scale Structure of the MAX-SAT Landscape Using Populations

Abstract: A new algorithm for solving MAX-SAT problems is introduced which clusters good solutions and restarts the search from the closest feasible solution to the centroid of each cluster. This is shown to be highly efficient for finding good solutions of large MAX-SAT problems. We argue that this success is due to the population learning the large-scale structure of the fitness landscape. Systematic studies of the landscape are presented to support this hypothesis. In addition, a number of other strategies …
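The algorithm summarised in the abstract clusters a population of good assignments and restarts local search from the point nearest each cluster centroid. Below is a minimal sketch of that idea; it assumes DIMACS-style clauses, k-means as the clustering step, and rounding the centroid as the "closest feasible solution". None of these are confirmed details of the authors' implementation.

```python
# Minimal sketch of the cluster-and-restart strategy described in the abstract.
# Assumptions (not from the paper): assignments are 0/1 NumPy vectors, clauses
# are DIMACS-style lists of signed integers, and k-means is the clustering step.
import numpy as np
from sklearn.cluster import KMeans

def num_satisfied(clauses, assignment):
    """MAX-SAT objective: number of clauses satisfied by a 0/1 assignment."""
    return sum(
        any((assignment[abs(lit) - 1] == 1) == (lit > 0) for lit in clause)
        for clause in clauses
    )

def cluster_and_restart(good_solutions, k=5):
    """Cluster good solutions and return one restart point per cluster:
    the 0/1 assignment nearest that cluster's centroid (here obtained by
    rounding, one reading of 'closest feasible solution')."""
    X = np.array(good_solutions, dtype=float)
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    return [np.rint(c).astype(int) for c in km.cluster_centers_]
```

Under these assumptions, a typical loop would run local search from random starts to build `good_solutions`, call `cluster_and_restart`, and then continue local search from the returned points.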

Cited by 20 publications (24 citation statements).
References 25 publications (27 reference statements).
“…More details can be found in [1]. We were unable to compare our algorithm with most other algorithms that appear in the literature since the other studies were performed on much smaller instances (typically around 100 variables).…”
Section: Results
Mentioning confidence: 99%
“…However, there are few examples of EAs on real world problems where the algorithm unambiguously exploits this global knowledge of the landscape. Recently we proposed an algorithm for solving large MAX-SAT problems based on clustering good solutions which we argued does precisely this [1]. We review that work here and extend the idea to a second classic NP-Hard problem, Vertex Cover.…”
Section: Introduction
Mentioning confidence: 92%
“…al [15]. In both of these cases, local search must first be run multiple times with a uniform random initialization to construct a set of local optima.…”
Section: Estimating Backbone Variables
Mentioning confidence: 99%
“…The frequency of assignments for each bit found in the set of local optima is then used to initialize subsequent runs of local search. It is hypothesized that these frequencies can provide a good estimate of the backbone, a subset of variables that are consistently assigned either true or false across all global optima [26,15].…”
Section: Estimating Backbone Variables
Mentioning confidence: 99%
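The two passages above describe collecting local optima from uniformly random restarts and then using per-bit assignment frequencies to seed later runs. A rough sketch of that frequency-biased initialisation follows; the sampling scheme and function names are illustrative assumptions, not the cited authors' exact procedure.

```python
# Sketch of backbone-frequency initialisation as described in the quoted text.
# How the frequencies are turned into a biased restart is an assumption here.
import random

def bit_frequencies(local_optima):
    """Fraction of local optima in which each variable is set to 1."""
    n = len(local_optima[0])
    m = len(local_optima)
    return [sum(opt[i] for opt in local_optima) / m for i in range(n)]

def biased_restart(freqs, rng=random):
    """Fresh start point where each bit follows its observed frequency.
    Bits with frequency near 0 or 1 (likely backbone variables) are almost
    always fixed to the common value; uncertain bits remain near-random."""
    return [1 if rng.random() < f else 0 for f in freqs]
```

Under these assumptions, a few uniform-random local-search runs supply `local_optima`, and later runs start from `biased_restart(bit_frequencies(local_optima))`.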
“…That is, the high-fitness regions of the search space have to lie directly in the centre of a larger region containing good-quality solutions. This very particular landscape might appear slightly artificial and unlikely to occur in real-world problems; however, in a recent paper this strategy was found to be very effective on one of the classic combinatorial optimisation problems, namely MAX-3-SAT [24].…”
Section: A Strong Focusing
Mentioning confidence: 99%
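The focusing picture quoted above, in which high-fitness solutions sit near the centre of a larger region of good solutions, suggests a simple diagnostic: check whether fitness falls off with Hamming distance from the centroid of a set of good solutions. The sketch below is one such check and is purely illustrative; it does not reproduce the paper's own landscape studies.

```python
# Illustrative check of the 'focusing' picture: do better solutions tend to lie
# closer to the centre of the cloud of good solutions? Pearson correlation is
# an assumed choice of statistic, not taken from the paper.
import numpy as np

def centre_distance_fitness_correlation(solutions, fitnesses):
    """Correlation between Hamming distance from the population centre and
    fitness; a clearly negative value is consistent with the focusing picture."""
    X = np.array(solutions, dtype=float)
    centre = np.rint(X.mean(axis=0))          # nearest 0/1 point to the centroid
    dists = np.abs(X - centre).sum(axis=1)    # Hamming distance to that centre
    return np.corrcoef(dists, np.asarray(fitnesses, dtype=float))[0, 1]
```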