2010
DOI: 10.1007/978-3-642-14186-7_20
The Seventh QBF Solvers Evaluation (QBFEVAL’10)

Cited by 22 publications (25 citation statements); references 14 publications.
“…Additionally, unit clauses learnt by PicoSAT are propagated using QBF-specific QBCP rules within QxBF. Table 1 compares the impact of different FL approaches on the performance of QBF solvers based on search (DepQBF [17] and QuBE7.1 [10]) and variable elimination (Quantor [2], squolem [15] and Nenofex [16]) using all benchmarks from QBFEVAL'10 [20]. For QuBE7.1 internal preprocessing was disabled (QuBE7.1-np).…”
Section: Methods
confidence: 99%
“…In the proof of the following theorem, we actually show the upper bound by reduction to ∀∃-QBF instead of by an alternating procedure. This paves the way to using highly optimised QBF solvers [33] for deciding T ≡_Σ ∅.…”
Section: Concept Signatures and the Empty TBox
confidence: 99%
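The statement above refers to reducing a decision problem to a ∀∃-QBF (2QBF) query, to be discharged by an optimised QBF solver. Purely as an illustration of what such an instance looks like, here is a brute-force evaluator for closed ∀∃-QBF formulas; this is a didactic sketch of the semantics, not how the search- or elimination-based solvers benchmarked in QBFEVAL'10 work, and the function name is ours:

```python
from itertools import product

def forall_exists_sat(n_forall, n_exists, matrix):
    """Decide a closed ∀∃-QBF by exhaustive enumeration: for every
    assignment to the universal variables, some assignment to the
    existential variables must satisfy the propositional matrix.

    `matrix` is a callable taking the two assignment tuples and
    returning a truthy value when the matrix is satisfied.
    """
    for xs in product([False, True], repeat=n_forall):
        # If no existential completion satisfies the matrix for this
        # universal assignment, the formula is false.
        if not any(matrix(xs, ys)
                   for ys in product([False, True], repeat=n_exists)):
            return False
    return True

# ∀x ∃y. (x ↔ y) is true: y can always copy x.
print(forall_exists_sat(1, 1, lambda xs, ys: xs[0] == ys[0]))  # True
# ∀x ∃y. (x ∧ y) is false: the case x = False has no witness.
print(forall_exists_sat(1, 1, lambda xs, ys: xs[0] and ys[0]))  # False
```

Real 2QBF instances are of course handed to dedicated solvers in prenex CNF, since this enumeration is exponential in both quantifier blocks.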
“…SPLAnE generated the random traceability relation between feature model and component model to generate a complete SPL model. Further, to increase the complexity of the experiments, each SPL model is generated using 10 different topologies and 10 different levels of cross-tree constraints, with percentages {5, 10, 15, 20, 25, 30, 35, 40, 45, 50}, resulting in a total of 100 SPL models per SPLOT model. So from 698 SPLOT models, we got 69800 SPL models.…”
Section: Experiments 1: Validating SPLAnE with Feature Models from the…
confidence: 99%
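The experiment-size arithmetic quoted above (10 topologies × 10 cross-tree-constraint levels per SPLOT model, over 698 SPLOT models) can be checked directly; the variable names here are ours:

```python
# Reconstruct the benchmark-generation count described in the citing paper.
topologies = 10
ctc_percentages = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]

per_splot_model = topologies * len(ctc_percentages)  # 10 * 10 = 100
total_spl_models = 698 * per_splot_model             # 698 * 100 = 69800

print(per_splot_model, total_spl_models)  # 100 69800
```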
“…For each SPL model, 10 different topologies were generated to avoid threats to internal validity. Further, to increase the complexity of the experiments, 10 different levels of cross-tree constraints {5, 10, 15, 20, 25, 30, 35, 40, 45, 50} were added. Each randomly generated SPL model consists of a feature model, a component model and a traceability relation.…”
Section: Experiments 2: Validating SPLAnE with Randomly Generated Larg…
confidence: 99%