DOI: 10.29007/nb2g

Learning from Multiple Proofs: First Experiments

Abstract: Mathematical textbooks typically present only one proof for most of the theorems. However, there are infinitely many proofs for each theorem in first-order logic, and mathematicians are often aware of (and even invent new) important alternative proofs and use such knowledge for (lateral) thinking about new problems. In this paper we start exploring how the explicit knowledge of multiple (human and ATP) proofs of the same theorem can be used in learning-based premise selection algorithms in large-theory mathemat…
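The abstract's "learning-based premise selection" can be illustrated with a minimal k-NN-style sketch. This is not the paper's implementation; all names, features, and data below are hypothetical. The point it demonstrates is the one the paper studies: when a solved theorem has several alternative proofs, each proof's premises can contribute to the ranking for a new conjecture.

```python
# Illustrative sketch (not from the paper): a minimal k-NN premise
# selector of the kind used in large-theory reasoning. Theorems are
# represented by symbol-occurrence features; premises used in proofs
# of the nearest solved theorems are ranked for a new conjecture.
# All identifiers and data here are hypothetical.
from collections import Counter

# Hypothetical training data: solved theorems with their symbol
# features and the premises their (possibly multiple) proofs used.
solved = {
    "thm_add_comm": ({"plus", "nat"}, [["ax_plus_def", "ax_ind"],
                                       ["ax_plus_def", "lem_succ"]]),
    "thm_mul_comm": ({"times", "nat"}, [["ax_times_def", "ax_ind"]]),
}

def jaccard(a, b):
    """Similarity between two symbol-feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_premises(conjecture_features, k=2):
    """Rank premises by similarity-weighted votes from the k nearest
    solved theorems; using all known proofs of each neighbour lets
    alternative proofs contribute their premises too."""
    neighbours = sorted(
        solved.items(),
        key=lambda kv: -jaccard(conjecture_features, kv[1][0]))[:k]
    votes = Counter()
    for _, (feats, proofs) in neighbours:
        w = jaccard(conjecture_features, feats)
        for proof in proofs:
            for premise in proof:
                # Average a neighbour's weight over its alternative proofs.
                votes[premise] += w / len(proofs)
    return [p for p, _ in votes.most_common()]

print(select_premises({"plus", "nat", "zero"}))
# → ['ax_plus_def', 'ax_ind', 'lem_succ', 'ax_times_def']
```

Here `ax_plus_def` ranks first because both alternative proofs of the most similar theorem use it, which is exactly the kind of signal multiple proofs add over a single-proof training set.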

Cited by 10 publications (14 citation statements)
References 6 publications
“…Additional dependency data are obtained by running ATPs on the ATP problems created from the HOL Light proof dependencies, i.e., the ATPs are run in the re-proving mode. Such data are often smaller and preferable [48]. These data are again exported using the same format as in (1).…”
Section: Total 855
Mentioning confidence: 99%
“…No "software engineering" or other approach can prevent new shortcuts to be found in mathematics, unless an exhaustive (and infeasible) proof minimization is applied. 48 An interesting case is McAllester's Ontic [52]. The whole library is searched automatically, but the automation is fast and intentionally incomplete.…”
Section: Related Work and Contributions
Mentioning confidence: 99%
“…They tend to involve more, different facts than Sledgehammer proofs. Sometimes they rely on induction, which is beyond the scope of first-order provers; but even excluding induction, there is evidence that the provers work better if the proofs used for learning were produced by similar provers [18,38]. A special mode of Sledgehammer runs an automatic prover on all available facts to learn from machine-generated proofs.…”
Section: Learning From and For Isabelle
Mentioning confidence: 99%
“…It is also worth mentioning that even in the ITP setting, ATP proofs are typically a valuable source of training data for learning premise selection [11,6]. This means that the ATP and ITP lemma extraction could likely be fruitfully combined in the various strong [AI]TP "hammer" systems.…”
Section: Related Work
Mentioning confidence: 99%