2011
DOI: 10.1007/s10462-011-9290-2

Simple algorithm portfolio for SAT

Abstract: The importance of algorithm portfolio techniques for SAT has long been noted, and a number of very successful systems have been devised, including the most successful one, SATzilla. However, all these systems are quite complex (to understand, reimplement, or modify). In this paper we present an algorithm portfolio for SAT that is extremely simple, but at the same time so efficient that it outperforms SATzilla. For a new SAT instance to be solved, our portfolio finds its k-nearest neighbors from the training se…
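The abstract describes the core mechanism: given a new instance, find its k nearest neighbors in a feature space of training instances and run the solver that performed best on those neighbors. A minimal sketch of that idea is below; the function name, the Euclidean distance metric, and the runtime-matrix layout are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def knn_select_solver(query_features, train_features, train_runtimes, k=5):
    """Illustrative k-NN portfolio selection (sketch, not the paper's code).

    train_features:  (n_instances, n_features) feature vectors.
    train_runtimes:  (n_instances, n_solvers) runtimes; entry [i, s] is the
                     runtime of solver s on training instance i.
    Returns the index of the solver to run on the query instance.
    """
    # Euclidean distance from the query to every training instance.
    dists = np.linalg.norm(train_features - query_features, axis=1)
    # Indices of the k nearest training instances.
    nearest = np.argsort(dists)[:k]
    # Pick the solver with the lowest total runtime on those neighbors.
    totals = train_runtimes[nearest].sum(axis=0)
    return int(np.argmin(totals))
```

Other aggregation rules are possible (e.g. counting which solver wins most often among the neighbors); total runtime over the neighborhood is one simple, reasonable choice.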

Cited by 21 publications (19 citation statements)
References 14 publications
“…MedleySolver differs from SATZilla in that it targets SMT, learns solver orders, distributes time among solvers, and does not require training. ArgoSmArT k-NN [32] applies a pre-trained k-Nearest-Neighbor algorithm to portfolio SAT solving. Given a query, they deploy the most successful solver on the k nearest neighbors.…”
Section: Related Work
confidence: 99%
“…Task-independent meta-learning (Abdulrahman et al 2018), which simply identifies the globally best model on historical tasks, applies to the unsupervised setting as well as OD. This can be refined by identifying the best model on similar tasks, where task similarity is measured in the metafeature space via clustering (Kadioglu et al 2010) or nearest neighbors (Nikolić, Marić, and Janičić 2013).…”
Section: Model Selection, AutoML, Meta-learning
confidence: 99%
“…The natural next step is to construct a method that selects the heuristic or set of heuristics to use for a given instance based on that instance's characteristics, termed an offline learning heuristic selection hyper-heuristic (Burke et al 2010) or an algorithm portfolio. This is also referred to in the literature as the algorithm selection problem, and has been used to create solvers for satisfiability testing (Xu et al 2008, Nikolić et al 2013), quantified Boolean formulas (Pulina and Tacchella 2009), constraint programming (O'Mahony et al 2008), and planning problems (Howe et al 2000). Unlike many hyper-heuristics found in the literature, our procedure does not change the heuristic used during the solution process, nor does it construct a heuristic by selecting from a set of heuristic components.…”
Section: Algorithm Portfolio
confidence: 99%