2019
DOI: 10.1007/978-3-030-29007-8_3

A Neurally-Guided, Parallel Theorem Prover

Abstract: We present a prototype of a neurally-guided automatic theorem prover for first-order logic with equality. The prototype uses a neural network trained on previous proof search attempts to evaluate subgoals based directly on their structure, and hence bias proof search toward success. An existing first-order theorem prover is employed to dispatch easy subgoals and prune branches which cannot be solved. Exploration of the search space is asynchronous with respect to both the evaluation network and the existing pr…
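The abstract outlines a best-first search whose frontier is ordered by a learned evaluation of subgoal structure, with an existing prover used to close easy subgoals and prune dead branches. The sketch below illustrates that control flow only; `score_subgoal`, `try_dispatch`, and `expand` are hypothetical stand-ins (not the paper's network, prover interface, or calculus), and the asynchronous evaluation described in the abstract is omitted for brevity.

```python
# Minimal sketch of the search loop described in the abstract (not the authors'
# code). The neural evaluator and the external first-order prover are replaced
# by stand-ins: score_subgoal plays the role of the trained network and
# try_dispatch plays the role of the existing prover that closes easy subgoals
# or prunes unsolvable branches.
import heapq
import itertools
from dataclasses import dataclass, field


@dataclass(order=True)
class Node:
    priority: float                 # lower = more promising, from the evaluator
    counter: int                    # tie-breaker for equal scores
    subgoal: str = field(compare=False)


def score_subgoal(subgoal: str) -> float:
    """Stand-in for the neural evaluation of a subgoal's structure."""
    return float(len(subgoal))      # e.g. prefer syntactically smaller subgoals


def try_dispatch(subgoal: str) -> str:
    """Stand-in for the external prover: 'proved', 'refuted', or 'unknown'."""
    if subgoal == "":               # trivially closed
        return "proved"
    if subgoal.startswith("#"):     # pretend these branches cannot be solved
        return "refuted"
    return "unknown"


def expand(subgoal: str) -> list[str]:
    """Stand-in inference step: split a subgoal into new subgoals."""
    return [subgoal[:-1], "#" + subgoal[1:]]


def guided_search(goal: str, budget: int = 1000) -> bool:
    """Best-first proof search biased by the (stand-in) learned evaluation."""
    tick = itertools.count()
    frontier = [Node(score_subgoal(goal), next(tick), goal)]
    for _ in range(budget):
        if not frontier:
            return False
        node = heapq.heappop(frontier)
        status = try_dispatch(node.subgoal)
        if status == "proved":
            return True
        if status == "refuted":
            continue                # prune this branch entirely
        for child in expand(node.subgoal):
            heapq.heappush(frontier, Node(score_subgoal(child), next(tick), child))
    return False


if __name__ == "__main__":
    print(guided_search("p(x) & q(x) -> q(x)"))
```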

Cited by 9 publications (6 citation statements)
References 20 publications (1 reference statement)

Citation statements (ordered by relevance):
“…This will not only speed up inference, but will also make training more efficient as fewer wrong proofs will be considered. This idea is similar to that of neurally-guided theorem proving (Wang et al, 2017; Rawson and Reger, 2019).…”
Section: Parametric Heuristics
Mentioning confidence: 86%
“…As an extension of FormulaNet, [40] construct syntax trees of HOL formulas as structural inputs and use message-passing GNNs to learn features of HOL to guide theorem proving by predicting tactics and tactic arguments at every step of the proof. LERNA [41] uses convolutional neural networks (CNNs) [42] to learn previous proof search attempts (logic formulas) represented by graphs to guide the current proof search for ATP. NeuroSAT [43,44] reads SAT queries (logic formulas) as graphs and learns the features using different graph embedding strategies (e.g.…”
Section: Related Work
Mentioning confidence: 99%
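The excerpt above describes a recurring pattern in this line of work: encode a formula's syntax tree as a graph and learn features over it by message passing. The following sketch illustrates that pattern on a toy first-order formula; the formula, the random initial embeddings, and the mean-aggregation update are assumptions made for the example and do not reproduce any of the cited architectures.

```python
# Illustrative sketch (not from any of the cited systems): represent a
# formula's syntax tree as a graph and run a few rounds of message passing.
import random

# Syntax tree of f(x) = g(x, c) as (node_id, label) plus undirected edges.
nodes = {0: "=", 1: "f", 2: "g", 3: "x", 4: "x", 5: "c"}
edges = [(0, 1), (0, 2), (1, 3), (2, 4), (2, 5)]

# Adjacency list for undirected message passing.
adj = {n: [] for n in nodes}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

# Initial embeddings: one random vector per distinct symbol, shared by label,
# so both occurrences of "x" start identical.
random.seed(0)
dim = 4
symbol_vec = {lab: [random.uniform(-1, 1) for _ in range(dim)]
              for lab in set(nodes.values())}
h = {n: list(symbol_vec[lab]) for n, lab in nodes.items()}

def step(h):
    """One round: each node averages its neighbours' vectors with its own."""
    new_h = {}
    for n, vec in h.items():
        msgs = [h[m] for m in adj[n]] + [vec]
        new_h[n] = [sum(v[i] for v in msgs) / len(msgs) for i in range(dim)]
    return new_h

for _ in range(3):              # three rounds of propagation
    h = step(h)

graph_embedding = [sum(h[n][i] for n in nodes) / len(nodes) for i in range(dim)]
print(graph_embedding)          # pooled vector a downstream scorer could consume
```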
“…Approaches for first-order logic also differ in tasks they were evaluated on, with some evaluated on offline tasks such as premise selection [47], [59], [62], length prediction [70], and a few in online proof guidance [41], [71], [72], [73]. In online proof guidance, which our work targets, existing work are based on simpler tableaux based reasoners [71], [72], [73]. Unlike these approaches, TRAIL targets guiding efficient, more capable saturation-based theorem provers.…”
Section: Related Work
Mentioning confidence: 99%
“…However, approaches that target specific logics such as propositional logic [70], [74] and fragments of first-order logic [75], fail to preserve properties specific to first-order logic. Recent work tried to address this limitation by preserving properties such as invariance to predicate and function argument order [62], [71] and variable quantification [12], [47], [62], [71], [72], [73]. Other approaches specifically target higher-order logics, which have different corresponding graph structures from first-order logic [12], [59].…”
Section: Related Work
Mentioning confidence: 99%