Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.710
Hierarchical Graph Network for Multi-hop Question Answering

Abstract: In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question answering. To aggregate clues from scattered texts across multiple paragraphs, a hierarchical graph is created by constructing nodes on different levels of granularity (questions, paragraphs, sentences, entities), the representations of which are initialized with pre-trained contextual encoders. Given this hierarchical graph, the initial node representations are updated through graph propagation, and multi-hop reasoning is performed…
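The abstract describes a graph whose nodes sit at four granularities (question, paragraphs, sentences, entities), linked hierarchically, with node representations refined by graph propagation. The following is a minimal illustrative sketch of that idea, not the authors' implementation: node names, the toy paragraph data, the random vector initialization (standing in for pre-trained encoder outputs), and the mean-pooling update rule are all assumptions made for demonstration.

```python
# Sketch of an HGN-style hierarchical graph: question -> paragraph ->
# sentence -> entity containment edges, plus one round of simple
# mean-pooling message passing. All names and data are illustrative.
import random

random.seed(0)
DIM = 8  # toy embedding size; real models use encoder hidden sizes


def make_vec():
    # Stand-in for a pre-trained contextual encoder's node representation.
    return [random.uniform(-1.0, 1.0) for _ in range(DIM)]


# Hypothetical document structure: paragraph -> sentences -> entity mentions.
paragraphs = {
    "p0": {"s0": ["e_Paris"], "s1": ["e_France"]},
    "p1": {"s2": ["e_Seine"]},
}

nodes = {"q": make_vec()}  # the question node sits at the top
edges = []  # undirected (parent, child) containment pairs

for p, sents in paragraphs.items():
    nodes[p] = make_vec()
    edges.append(("q", p))
    for s, ents in sents.items():
        nodes[s] = make_vec()
        edges.append((p, s))
        for e in ents:
            nodes[e] = make_vec()
            edges.append((s, e))


def propagate(nodes, edges):
    """One propagation step: each node averages its own vector with
    its neighbours' vectors (a crude stand-in for a GNN layer)."""
    neigh = {n: [] for n in nodes}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    out = {}
    for n, vec in nodes.items():
        group = [vec] + [nodes[m] for m in neigh[n]]
        out[n] = [sum(v[i] for v in group) / len(group) for i in range(DIM)]
    return out


updated = propagate(nodes, edges)
print(len(nodes), len(edges))  # 9 nodes, 8 containment edges
```

In the actual model the update would be a learned graph neural network layer and the final node states would feed task-specific heads (paragraph selection, supporting-fact prediction, answer span extraction); this sketch only shows the hierarchical structure and one propagation pass.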

Cited by 100 publications (108 citation statements)
References 39 publications
“…Additionally, it relies on a Graph Neural Network (GNN) to answer the questions. The Hierarchical Graph Network (HGN) model (Fang et al., 2020) builds a hierarchical graph with three levels: entities, sentences, and paragraphs, to allow for joint reasoning. DecompRC (Min et al., 2019b) takes a completely different approach of learning to decompose the question (using additional annotations) and then answering the decomposed questions using a standard single-hop RC system.…”
Section: Related Work
confidence: 99%
“…We achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with state-of-the-art systems SAE (Tu et al., 2020) and HGN (Fang et al., 2019), which both (unlike us) learn from strong, supporting-fact supervision about which sentences are relevant to the question.…”
Section: Results on Question Answering
confidence: 98%
“…First, QA models that use decompositions outperform a strong RoBERTa baseline (Min et al., 2019a) by 3.1 points in F1 on the original dev set, 10 points on the out-of-domain dev set from Min et al. (2019b), and 11 points on the multi-hop dev set from Jiang and Bansal (2019a). Our method is competitive with state-of-the-art methods SAE (Tu et al., 2020) and HGN (Fang et al., 2019) that use additional, strong supervision on which sentences are relevant to the question. Second, our analysis shows that sub-questions improve multi-hop QA by using the single-hop QA model to retrieve question-relevant text.…”
Section: Seq2seq Or
confidence: 89%
“…Datasets D and T(D): HotpotQA is a popular multi-hop QA dataset with about 113K questions which has spurred many models (Nishida et al., 2019; Xiao et al., 2019; Tu et al., 2020; Fang et al., 2020). We use the distractor setting where each question has a set of 10 input paragraphs, of which two were used to create the multifact question.…”
Section: Methods
confidence: 99%
“…Multi-hop Reasoning: Many multifact reasoning approaches have been proposed for HotpotQA and similar datasets (Mihaylov et al., 2018; Khot et al., 2020). These use iterative fact selection (Nishida et al., 2019; Tu et al., 2020; Asai et al., 2020; Das et al., 2019), graph neural networks (Xiao et al., 2019; Fang et al., 2020; Tu et al., 2020), or simply cross-document self-attention (Yang et al., 2019; Beltagy et al., 2020) to capture inter-paragraph interaction. While these approaches have pushed the state of the art, the extent of actual progress on multifact reasoning remains unclear.…”
Section: Reducing Disconnected Reasoning
confidence: 99%