Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.97
Explainable Multi-hop Verbal Reasoning Through Internal Monologue

Abstract: Many state-of-the-art (SOTA) language models have achieved high accuracy on several multi-hop reasoning problems. However, these approaches tend not to be interpretable because they do not make the intermediate reasoning steps explicit. Moreover, models trained on simpler tasks tend to fail when directly tested on more complex problems. We propose the Explainable multi-hop Verbal Reasoner (EVR) to address these limitations by (a) decomposing multi-hop reasoning problems into several simple ones, and (b) using nat…

Cited by 5 publications (13 citation statements). References 19 publications.
“…There has been also an increasing interest in solving proof generation iteratively. EVR (Liang et al, 2021) splits the question into sub-questions, using generated intermediate texts to guide proof generation step by step. ProofWriter (Tafjord et al, 2021) shares a similar idea but uses intermediate textual conclusions instead and a more powerful T5-11B model (Raffel et al, 2020) for generation, which makes it hard to reproduce.…”
Section: Related Work
“…prediction modules once at first to predict the answer A and the strategy of the proof (refer to §3.1, where the latter one will result in different proof generation procedures. In order to improve the reasoning efficiency as well as accuracy, instead of using generated intermediate texts (Liang et al, 2021;Tafjord et al, 2021), all possible nodes (rules and facts) are represented by node embeddings in IBR. The initial state of the proof is only the representation of the question h Q , then the rest of the reasoning path will be constructed based on it.…”
Section: Overview