2019
DOI: 10.1007/978-3-030-29026-9_21
ENIGMAWatch: ProofWatch Meets ENIGMA

Abstract: In this work we describe a new learning-based proof guidance, ENIGMAWatch, for saturation-style first-order theorem provers. ENIGMAWatch combines two guiding approaches for given-clause selection implemented for the E ATP system: ProofWatch and ENIGMA. ProofWatch is motivated by the watchlist (hints) method and based on symbolic matching of multiple related proofs, while ENIGMA is based on statistical machine learning. The two methods are combined by using the evolving information about symbolic proof matc…

Cited by 9 publications (6 citation statements)
References 28 publications
“…In the former the matching (actually, clause classification) is approximate, weighted and learned, while with hints the clause matching/classification is crisp, logic-rooted and preprogrammed, sometimes running into NP-hardness issues. Our latest comparison [12], done over the Mizar/MPTP corpus in the symbol-based setting, showed better performance of ENIGMA over using hints, most likely due to the better generalization behavior of ENIGMA based on statistical (GBDT) learning.…”
Section: B. Discussion Of Anonymization
confidence: 92%
“…The selection of the "right" given clause is known to be vital for the success of the proof search. The ENIGMA system [3,7,10-12,14] applies various machine learning methods for given clause selection, learning from a large number of previous successful proof searches. The training data consists of clauses processed during a proof search, labeling the clauses that appear in the discovered proof as positive, and the other (thus unnecessary) processed clauses as negative.…”
Section: Contributions
confidence: 99%
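The labeling scheme described in the excerpt above can be sketched in a few lines. This is a minimal illustration, not ENIGMA's actual extraction code; the clause names and containers are hypothetical stand-ins for clauses parsed from a prover's output.

```python
# Sketch of ENIGMA-style training-data construction from one proof search.
# "processed" stands for all given clauses selected during the search;
# "proof" for the subset that appears in the discovered proof.
processed = ["c1", "c2", "c3", "c4", "c5"]  # hypothetical clause identifiers
proof = {"c2", "c5"}

# Clauses used in the proof are positive examples (1); the rest, having
# been processed but left unused, are negative examples (0).
training_data = [(clause, 1 if clause in proof else 0) for clause in processed]
```

In the real system this labeling is repeated over many successful proof searches, and a classifier is trained on the aggregated positive/negative clause examples.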
“…The first ENIGMA [11] used fast linear classification [4] with hand-crafted clause features based on symbol names, representing clauses by fixed-length numeric vectors. Follow-up versions [3,7,12,14] introduced context-based clause evaluation and fast dimensionality reduction by feature hashing, and employed Gradient Boosting Decision Trees (GBDTs, implemented by the XGBoost and LightGBM systems [2,18]) and Recursive Neural Networks (implemented in PyTorch) as the underlying machine learning methods.…”
Section: Contributions
confidence: 99%
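The feature hashing mentioned in the excerpt above can be illustrated with a small sketch. This is a generic, self-contained rendition of the technique, not ENIGMA's implementation: the feature names, bucket count, and hash choice are assumptions made for the example.

```python
import hashlib

def hash_features(symbol_counts, n_buckets=8):
    """Reduce a sparse map of symbol-based features to a fixed-length
    numeric vector by hashing each feature name into one of n_buckets
    (the "hashing trick" used for fast dimensionality reduction)."""
    vec = [0] * n_buckets
    for name, count in symbol_counts.items():
        # A stable hash keeps the name -> bucket mapping reproducible.
        bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % n_buckets
        vec[bucket] += count
    return vec

# Hypothetical symbol counts extracted from a clause.
features = {"f": 2, "g": 1, "=": 3}
vec = hash_features(features)
```

Fixed-length vectors like `vec` are what a GBDT (e.g. XGBoost or LightGBM) consumes; hashing trades occasional bucket collisions for a bounded, symbol-set-independent feature space.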
“…Several learning-guided ATP systems have been developed that interleave proving with supervised learning from proof searches [4,10-13,17,19,23,39]. In the saturation-style setting used by ATP systems like E [31] and Vampire [21], direct learning-based selection of the most promising given clauses leads already to large improvements [14], without other changes to the proof search procedure.…”
Section: Introduction
confidence: 99%