2005
DOI: 10.1007/11527695_25
Fifty-Five Solvers in Vancouver: The SAT 2004 Competition

Abstract: For the third consecutive year, a SAT competition was organized as a joint event with the SAT conference. With 55 solvers from 25 author groups, the competition was a clear success. One of the noticeable facts from the 2004 competition is the superiority of incomplete solvers on satisfiable random k-SAT benchmarks. It can also be pointed out that the complete solvers awarded this year, namely zchaff, jerusat1.3, satzoo-1.02, kcnfs and march-eq, participated in the SAT 2003 competition (or at least for…

Cited by 31 publications (19 citation statements). References 22 publications (16 reference statements).
“…Minisat was enhanced by a restart strategy that was found to be optimal for this solver in [2]. We used eight publicly available benchmark families: sat04-ind-goldberg03-hard eq check [6] (henceforth, abbreviated to ug), sat04-ind-maris03-gripper [6] (mm), sat04-ind-velevvliw unsat 2.0 [9] (uv2), SAT-Race TS 1 [10] For each solver, we compared the following four versions, applying: (1) no shrinking; (2) the base version of shrinking, corresponding to Eureka's version of shrinking (recall from Section 2 that Eureka's shrinking algorithm is largely similar to Chaff's: its shrinking condition is based on clause length and the sorting scheme picks variables in descending order of decision levels); (3) the base version, modified by applying activity ordering; (4) the base version, modified by using the decision-level-based shrinking condition. Table 1 provides some statistics regarding the benchmark families as well as Eureka's results.…”
Section: Results
confidence: 99%
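The shrinking scheme quoted above — a shrinking condition based on clause length, and a sorting scheme that picks literals in descending order of decision levels — can be sketched as follows. The function name, data layout, and threshold are illustrative assumptions, not Eureka's or Chaff's actual code.

```python
def shrink_candidate(clause, decision_level, length_threshold=40):
    """Sketch of the length-based shrinking step.

    clause: list of literals; decision_level: maps literal -> decision level.
    If the clause is long enough to qualify for shrinking, return its
    literals sorted in descending order of decision level; otherwise None.
    The threshold value is an invented placeholder.
    """
    if len(clause) < length_threshold:
        return None  # clause too short: shrinking condition not met
    return sorted(clause, key=lambda lit: decision_level[lit], reverse=True)
```

The sorted list determines the order in which literals are considered when the learned clause is shrunk.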
“…We concentrate on zchaff rand 's version of shrinking, since it was shown to be more useful in [5], and also performed better in the SAT'04 competition [6]. Suppose Chaff encounters a conflict.…”
Section: Algorithmic Details and New Heuristics
confidence: 99%
“…For the experimental results, the three algorithms were implemented on the basis of the PB solver MINISAT+ [13] or the SAT solver MINISAT [14], respectively, which participated very successfully in the past SAT Competition [18] and PB Evaluation [19]. By using the same PB solver as the basis for all three algorithms, we have the chance of a fair comparison on the basis of runtime.…”
Section: Results
confidence: 99%
“…This is done by removing that domain and adding new domains, where an improvement in at least one dimension has to be achieved (lines 12–15). Finally, the set D is cleaned up: sub-domains of other domains are removed (lines 17–19).…”
Section: Algorithm
confidence: 99%
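The clean-up step described above — removing every domain that is a sub-domain of another domain in D — can be sketched like this; representing domains as frozensets, so that sub-domain means strict subset, is an assumption made purely for illustration.

```python
def clean_up(domains):
    """Drop each domain that is a strict sub-domain (subset) of another.

    domains: list of frozensets. A domain survives only if no other
    domain in the list strictly contains it.
    """
    return [d for d in domains
            if not any(d < other for other in domains)]
```
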
“…Most work in the field has focused just on performance-oriented quality metrics for solvers. For example, the basic measure used in both the most recent (at the time of writing) SAT Competition and SMT Competition was simply the pair of the number of benchmarks solved and running time to solve them, compared in the natural lexicographic order (for the competitions mentioned, see, e.g., [1,3]). While the SAT competition has also experimented recently with more complex measures, they are also centered on performance.…”
confidence: 99%
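The lexicographic measure mentioned above — more benchmarks solved wins, with ties broken by lower total running time — can be illustrated with a short sketch; the solver names and numbers here are invented:

```python
def ranking_key(result):
    """Key for lexicographic ranking: (benchmarks solved, total time).

    Negating the solved count makes ascending sort prefer more solved
    benchmarks first, then lower runtime on ties.
    """
    solved, total_time = result
    return (-solved, total_time)

# Hypothetical competition results: solver -> (solved, total seconds).
results = {"A": (120, 3600.0), "B": (120, 2900.0), "C": (118, 1000.0)}
ranked = sorted(results, key=lambda s: ranking_key(results[s]))
# → ["B", "A", "C"]: B beats A on runtime; C solved fewer benchmarks.
```
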