2020
DOI: 10.1609/aaai.v34i05.6441

Select, Answer and Explain: Interpretable Multi-Hop Reading Comprehension over Multiple Documents

Abstract: Interpretable multi-hop reading comprehension (RC) over multiple documents is a challenging problem because it demands reasoning over multiple information sources and explaining the answer prediction by providing supporting evidence. In this paper, we propose an effective and interpretable Select, Answer and Explain (SAE) system to solve the multi-document RC problem. Our system first filters out answer-unrelated documents and thus reduces the amount of distracting information. This is achieved by a document c…
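The selection step sketched in the abstract can be pictured as a binary relevance classifier run over each (question, document) pair, keeping only the top-scoring documents. The snippet below is a minimal illustration of that idea, not the SAE implementation; encode_pair, the hidden size, and top_k are assumptions standing in for a pretrained encoder and the paper's actual settings.

import torch
import torch.nn as nn

def encode_pair(question: str, document: str) -> torch.Tensor:
    """Hypothetical helper: fixed-size encoding of [question; document] (e.g. a BERT pooled output)."""
    raise NotImplementedError

class DocumentSelector(nn.Module):
    """Scores each candidate document for answer-relatedness and keeps the top ones."""
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, question: str, documents: list, top_k: int = 2) -> list:
        # One relevance score per document, computed independently per (question, document) pair.
        scores = torch.cat([self.scorer(encode_pair(question, d)) for d in documents])
        keep = scores.topk(min(top_k, len(documents))).indices
        return [documents[int(i)] for i in keep]  # answer-related documents only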

Cited by 108 publications (110 citation statements). References 13 publications.
“…More recently, SAE (Tu et al, 2020) defines three types of edge in the sentence graph based on the named entities and noun phrases appearing in the question and sentences. C2F Reader (Shao et al, 2020) uses graph attention or self-attention on entity graph, and argues that this graph may not be necessary for multi-hop reasoning.…”
Section: Related Work (mentioning)
confidence: 99%
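As a rough illustration of the edge construction the excerpt describes, the sketch below links sentences through shared named-entity and noun-phrase mentions. The exact edge definitions are those given in the SAE paper, not this code; extract_mentions is a hypothetical stand-in for an NER / noun-phrase chunker, and the edge-type labels are invented for the example.

from itertools import combinations

def extract_mentions(text: str) -> set:
    """Hypothetical helper: named entities and noun phrases occurring in text."""
    raise NotImplementedError

def build_sentence_graph(question: str, docs: list) -> list:
    """docs is a list of documents, each a list of sentences.
    Returns edges as (doc_i, sent_i, doc_j, sent_j, edge_type) tuples."""
    q_mentions = extract_mentions(question)
    sents = [(d, s, extract_mentions(sent))
             for d, doc in enumerate(docs) for s, sent in enumerate(doc)]
    edges = []
    for (d1, s1, m1), (d2, s2, m2) in combinations(sents, 2):
        if d1 == d2:
            edges.append((d1, s1, d2, s2, "same_document"))    # sentences of one document
        elif m1 & m2:
            edges.append((d1, s1, d2, s2, "shared_mention"))   # cross-document mention overlap
        if (m1 & q_mentions) and (m2 & q_mentions):
            edges.append((d1, s1, d2, s2, "question_mention"))  # both touch a question mention
    return edges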
“…Xiao et al (2019) propose a Dynamically Fused Graph Networks (DFGN) model that first creates an entity graph from paragraphs, dynamically extracts sub-graphs, and fuses them with paragraph representations. The Select, Answer, Explain (SAE) model (Tu et al, 2020) also first selects relevant documents and uses them to produce answers and explanations. However, it relies on a self-attention over all document representations to capture potential interactions.…”
Section: Related Work (mentioning)
confidence: 99%
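The cross-document interaction mentioned in the last sentence can be read as plain self-attention over one vector per candidate document. The sketch below shows that operation in PyTorch; the shapes and hyperparameters are illustrative assumptions, not values from the SAE paper.

import torch
import torch.nn as nn

class DocumentInteraction(nn.Module):
    """Lets every document representation attend to every other one."""
    def __init__(self, hidden_dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, doc_reprs: torch.Tensor, pad_mask: torch.Tensor = None) -> torch.Tensor:
        # doc_reprs: (batch, num_docs, hidden_dim), one vector per candidate document
        # pad_mask:  (batch, num_docs), True where a document slot is padding
        out, _ = self.attn(doc_reprs, doc_reprs, doc_reprs, key_padding_mask=pad_mask)
        return out  # each document representation now conditioned on the others

# Example: 2 questions, 10 candidate documents each.
contextualized = DocumentInteraction()(torch.randn(2, 10, 768))  # -> (2, 10, 768)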
“…This is a challenging reasoning task that requires QA systems to identify relevant pieces of information in the given text and learn to compose them to answer a question. To enable progress in this area, many datasets (Welbl et al, 2018; Talmor and Berant, 2018; Yang et al, 2018; Khot et al, 2020) and models (Min et al, 2019b; Xiao et al, 2019; Tu et al, 2020) have been proposed. This work focuses on HotpotQA (Yang et al, 2018), which contains 105,257 multi-hop questions derived from two Wikipedia paragraphs, where the correct answer is a span in these paragraphs or yes/no. Due to the multi-hop nature of this dataset, it is natural to assume that the relevance of a sentence for a question would depend on the other sentences considered to be relevant.…”
Section: Introduction (mentioning)
confidence: 99%
“…Multi-hop Reasoning: Many multifact reasoning approaches have been proposed for HotpotQA and similar datasets (Mihaylov et al, 2018; Khot et al, 2020). These use iterative fact selection (Nishida et al, 2019; Tu et al, 2020; Asai et al, 2020; Das et al, 2019), graph neural networks (Xiao et al, 2019; Fang et al, 2020; Tu et al, 2020), or simply cross-document self-attention (Yang et al, 2019; Beltagy et al, 2020) to capture inter-paragraph interaction. While these approaches have pushed the state of the art, the extent of actual progress on multifact reasoning remains unclear.…”
Section: Reducing Disconnected Reasoning (mentioning)
confidence: 99%