DOI: 10.29007/8mwc

Deep Network Guided Proof Search

Abstract: Deep learning techniques lie at the heart of several significant AI advances in recent years, including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go. Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification. Here we suggest deep learning based guidance in the proof…

Cited by 66 publications (99 citation statements). References 37 publications (57 reference statements).
“…However, without any context this may be hard to decide. As in ENIGMA and [26], we introduce more context into our setting by using the problem's conjecture. The negated conjecture is translated by E into a set of clauses.…”
Section: Neural Architecture for ATP Guidance (mentioning)
confidence: 99%
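The statement above describes conditioning clause evaluation on the problem's negated, clausified conjecture. The sketch below only illustrates that idea: a hand-written symbol-overlap score stands in for the neural model used in the cited work, and the clause encoding and function names are assumptions, not the paper's implementation.

```python
# Illustrative sketch: score a clause against the negated conjecture's clauses.
# A learned model would replace conjecture_overlap_score in actual guidance.
from collections import Counter

def symbol_counts(clause):
    """Count function/predicate symbols in a clause given as a list of literal strings.
    Crude heuristic: tokens starting with an uppercase letter are variables."""
    counts = Counter()
    for literal in clause:
        for token in literal.replace("(", " ").replace(")", " ").replace(",", " ").split():
            token = token.strip("~|")
            if token and not token[0].isupper():
                counts[token] += 1
    return counts

def conjecture_overlap_score(clause, conjecture_clauses):
    """Fraction of the clause's symbol occurrences shared with the conjecture."""
    clause_syms = symbol_counts(clause)
    conj_syms = Counter()
    for c in conjecture_clauses:
        conj_syms.update(symbol_counts(c))
    shared = sum(min(clause_syms[s], conj_syms[s]) for s in clause_syms)
    return shared / (sum(clause_syms.values()) or 1)

# Usage: rank candidate clauses by their overlap with the conjecture's clausified form.
conjecture = [["~subset(a,b)"], ["member(sk1,a)"], ["~member(sk1,b)"]]
candidates = [["subset(X,Y) | member(f(X,Y),X)"], ["p(c) | q(d)"]]
ranked = sorted(candidates, key=lambda cl: conjecture_overlap_score(cl, conjecture), reverse=True)
```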
“…More recently, machine learning has also started to be used to guide the internal search of the ATP systems. In sophisticated saturation-style provers this has been done by feedback loops for strategy invention [38,16,33] and by using supervised learning [14,26] to select the next given clause [27]. In the simpler connection tableau systems such as leanCoP [29], supervised learning has been used to choose…”
mentioning
confidence: 99%
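Since this statement turns on given-clause selection in saturation provers, here is a minimal sketch of the given-clause loop showing where a supervised clause-selection model would plug in. It assumes clauses are represented as collections of literals; `score` and `generate_inferences` are caller-supplied stand-ins, and real provers such as E use far more elaborate data structures and inference machinery.

```python
def saturate(axioms, negated_conjecture, score, generate_inferences, max_steps=10000):
    """Minimal given-clause loop; `score` is where a learned clause-selection
    model (ENIGMA-style guidance) would be plugged in."""
    unprocessed = list(axioms) + list(negated_conjecture)
    processed = []
    for _ in range(max_steps):
        if not unprocessed:
            return "saturated"          # no contradiction derivable from the input
        # Clause selection: take the unprocessed clause with the highest score.
        unprocessed.sort(key=score, reverse=True)
        given = unprocessed.pop(0)
        if len(given) == 0:             # empty clause: contradiction, proof found
            return "proof found"
        # Perform all inferences between the given clause and processed clauses.
        for new in generate_inferences(given, processed):
            if new not in processed and new not in unprocessed:
                unprocessed.append(new)
        processed.append(given)
    return "resource limit reached"
```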
“…The domain-specific Sine [15] method is a prominent example, and selects axioms that share infrequently-appearing symbols with the goal. ATP systems rely on clause selection for driving inferences, and a recent use of machine learning for clause selection [27] was integrated in the E theorem prover [35]. In SAT, CDCL solvers select clauses using unit propagation and conflict analysis, and rely on garbage collection of redundant clauses to balance available inferences from memory and propagation overhead.…”
Section: Related Work (mentioning)
confidence: 99%
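The Sine method mentioned in this statement selects axioms that share rare symbols with the goal. The sketch below is a rough, simplified reconstruction of that symbol-triggering idea, not the exact algorithm from [15] (which has further parameters such as generality thresholds and depth limits); `symbols_of`, `tolerance`, and `max_rounds` are illustrative assumptions.

```python
# Rough sketch of SInE-style axiom selection: symbols trigger the axioms in which
# they are among the rarest symbols, and selection grows outward from the goal.
from collections import Counter

def sine_select(axioms, goal_symbols, symbols_of, tolerance=1.0, max_rounds=5):
    """axioms: list of formulas; symbols_of(ax) returns the set of symbols it uses."""
    occurrences = Counter()
    for ax in axioms:
        occurrences.update(symbols_of(ax))

    # A symbol triggers an axiom if its global count is within `tolerance` times
    # the count of the rarest symbol occurring in that axiom.
    triggers = {}
    for ax in axioms:
        syms = symbols_of(ax)
        if not syms:
            continue
        rarest = min(occurrences[s] for s in syms)
        for s in syms:
            if occurrences[s] <= tolerance * rarest:
                triggers.setdefault(s, []).append(ax)

    # Grow the selection outward from the goal's symbols, round by round.
    selected, seen_symbols = [], set()
    frontier = set(goal_symbols)
    for _ in range(max_rounds):
        new_symbols = frontier - seen_symbols
        if not new_symbols:
            break
        seen_symbols |= new_symbols
        frontier = set()
        for s in new_symbols:
            for ax in triggers.get(s, []):
                if ax not in selected:
                    selected.append(ax)
                    frontier |= symbols_of(ax)
    return selected
```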
“…Mathematical formalisms take this to an extreme, since they simplify and tightly constrain the ways in which their users can write things down. We note that machine learning methods have been applied to corpora of formalised mathematics to good effect [30,31,33,44]. However, mathematics as it is done by most mathematicians has a very different structure, and there is a lot more of it available, as summarised in The Stack Exchange corpus alone is much larger than the 160000 games in AlphaGo's initial training set [34], but the data looks completely different.…”
Section: Introduction (mentioning)
confidence: 99%