Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.9

PRover: Proof Generation for Interpretable Reasoning over Rules

Abstract: Recent work by Clark et al. (2020) shows that transformers can act as "soft theorem provers" by answering questions over explicitly provided knowledge in natural language. In our work, we take a step closer to emulating formal theorem provers, by proposing PROVER, an interpretable transformer-based model that jointly answers binary questions over rule-bases and generates the corresponding proofs. Our model learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training para…

Cited by 49 publications (91 citation statements)
References 29 publications
“…The second baseline is PROVER (Saha et al., 2020), which handles the reasoning problem as a graph problem. This approach takes the input C and Q to produce both the final answer, {True, False}, and a graph that indicates the reasoning path.…”
Section: Baseline Models
Citation type: mentioning · confidence: 99%
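As a minimal sketch of the interface this citation describes (input context C and question Q; output a True/False answer plus a reasoning-path graph), the following Python stub may help. All function names and the hard-coded proof are hypothetical illustrations, not the authors' API.

```python
# Toy stand-in for the PROVER interface: the model maps a context C
# (facts and rules) and a question Q to a True/False answer plus a
# directed graph encoding the reasoning path.

def prover_predict(context, question):
    """Return (answer, proof_edges) for a rule-base question.

    proof_edges is a set of directed (parent, child) pairs over
    statement identifiers, standing in for the predicted proof graph.
    """
    # A real model would run node/edge prediction heads here; we
    # hard-code one tiny proof: fact1 feeds rule1, which resolves Q.
    answer = True
    proof_edges = {("fact1", "rule1"), ("rule1", "Q")}
    return answer, proof_edges

answer, proof = prover_predict(
    context=["fact1: the cat is blue", "rule1: if blue then cold"],
    question="Q: is the cat cold?",
)
```

The point of the interface is that the proof graph is produced jointly with the answer, so a consumer can inspect which facts and rules were chained.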
“…PROVER (Saha et al., 2020) builds on top of RoBERTa (Liu et al., 2019) and consists of a question answering (QA) module, a node module, and an edge module, where the node and edge modules are used to predict a single proof graph. The input to RoBERTa is the concatenation of the facts, rules, and the question.…”
Section: Baseline PROVER Model
Citation type: mentioning · confidence: 99%
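A minimal sketch of the input construction this citation mentions, namely concatenating facts, rules, and the question into one sequence for RoBERTa. The special tokens and their placement are assumptions for illustration; a real pipeline would delegate formatting to the tokenizer.

```python
def build_roberta_input(facts, rules, question, bos="<s>", sep="</s>"):
    """Concatenate facts, rules, and the question into one sequence,
    as described for PROVER's RoBERTa input. Special-token layout is
    an illustrative assumption, not the exact published format."""
    # Facts and rules together form the context C.
    context = " ".join(facts + rules)
    return f"{bos} {context} {sep} {question} {sep}"

seq = build_roberta_input(
    facts=["The cat is blue."],
    rules=["If something is blue then it is cold."],
    question="The cat is cold.",
)
```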
“…Following PROVER, we generate valid proofs during inference using an ILP, subject to multiple global constraints (see Saha et al. (2020)). For each predicted proof, given the predicted node and edge probabilities from MULTIPROVER, we obtain the corresponding predicted edges using Eqn.…”
Section: Integer Linear Program (ILP) Inference
Citation type: mentioning · confidence: 99%
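The constrained inference described here can be caricatured as follows. This sketch enforces only one global constraint (an edge may connect only predicted nodes) via simple thresholding, whereas the actual method solves an ILP with richer constraints such as graph connectivity; all thresholds and names are illustrative.

```python
def infer_proof_edges(node_prob, edge_prob, threshold=0.5):
    """Toy stand-in for ILP-based proof inference: select nodes and
    edges whose probabilities clear a threshold, subject to the
    consistency constraint that an edge's endpoints are both selected
    nodes. PROVER/MULTIPROVER instead solve an ILP with additional
    global constraints; this only illustrates the consistency part."""
    nodes = {n for n, p in node_prob.items() if p >= threshold}
    edges = {
        (u, v)
        for (u, v), p in edge_prob.items()
        if p >= threshold and u in nodes and v in nodes
    }
    return nodes, edges

nodes, edges = infer_proof_edges(
    node_prob={"fact1": 0.9, "rule1": 0.8, "rule2": 0.2},
    edge_prob={("fact1", "rule1"): 0.95, ("fact1", "rule2"): 0.9},
)
# ("fact1", "rule2") is dropped even at high probability, because
# "rule2" is not among the predicted nodes.
```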