2019
DOI: 10.48550/arxiv.1909.07598

Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering

Abstract: Multi-hop question answering (QA) requires an information retrieval (IR) system that can find the multiple pieces of supporting evidence needed to answer a question, making the retrieval process very challenging. This paper introduces an IR technique that uses information about entities present in the initially retrieved evidence to learn to 'hop' to other relevant evidence. In a setting with more than 5 million Wikipedia paragraphs, our approach leads to a significant boost in retrieval performance. The retrieved evidence al…
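The abstract's core idea — extract entities from the first round of retrieved evidence and use them to "hop" to the next piece of evidence — can be sketched in a few lines. This is an illustrative toy, not the paper's actual components: the corpus, the regex-based entity extractor, and the word-overlap retriever are all stand-in assumptions.

```python
# Toy sketch of entity-centric multi-hop retrieval: entities found in the
# evidence retrieved at hop t expand the query for hop t+1.
import re

CORPUS = {
    "p1": "Alan Turing was born in Maida Vale, London.",
    "p2": "Maida Vale is a district in West London.",
    "p3": "The Turing Award is named after Alan Turing.",
}

def extract_entities(text):
    # Toy entity extractor: capitalized multi-word spans.
    return set(re.findall(r"(?:[A-Z][a-z]+ )+[A-Z][a-z]+", text))

def retrieve(query, corpus):
    # Toy lexical retriever: rank passages by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return scored[0][0]

def multi_hop_retrieve(question, corpus, hops=2):
    evidence = []
    query = question
    for _ in range(hops):
        # Exclude already-collected evidence so each hop finds new support.
        remaining = {k: v for k, v in corpus.items() if k not in evidence}
        pid = retrieve(query, remaining)
        evidence.append(pid)
        # Expand the next-hop query with entities from the new evidence.
        query = question + " " + " ".join(extract_entities(corpus[pid]))
    return evidence

print(multi_hop_retrieve("Which district was Alan Turing born in?", CORPUS))
# → ['p1', 'p2']
```

Note how the second hop reaches p2 only because the entity "Maida Vale", extracted from p1, was injected into the query — the question alone shares almost no vocabulary with p2.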

Cited by 4 publications (5 citation statements)
References 13 publications
“…Baselines We compare HGN with both published and unpublished work in both settings.

Model                                       Ans EM / F1     Sup EM / F1     Joint EM / F1
TPReasoner (Xiong et al., 2019)             36.04 / 47.43   - / -           - / -
Baseline Model                              23.95 / 32.89   3.86 / 37.71    1.85 / 16.15
QFE (Nishida et al., 2019)                  28.66 / 38.06   14.20 / 44.35   8.69 / 23.10
MUPPET (Feldman and El-Yaniv, 2019)         30.61 / 40.26   16.65 / 47.33   10.85 / 27.01
Cognitive Graph (Ding et al., 2019)         37.12 / 48.87   22.82 / 57.69   12.42 / 34.92
PR-BERT†                                    43.33 / 53.79   21.90 / 59.63   14.50 / 39.11
Golden Retriever (Qi et al., 2019)          37.92 / 48.58   30.69 / 64.24   18.04 / 39.13
Entity-centric BERT (Godbole et al., 2019)  41 …

For the Distractor setting, we compare with DFGN (Xiao et al., 2019), QFE (Nishida et al., 2019), the official baseline, and DecompRC (Min et al., 2019b). Unpublished work includes TAP2, EPS+BERT, SAE, P-BERT, LQR-net (Anonymous, 2020a), and ChainEx.…”
Section: Methods
confidence: 99%
“…For the Fullwiki setting, the published baselines include SemanticRetrievalMRS (Yixin Nie, 2019), Entity-centric BERT (Godbole et al, 2019), GoldEn Retriever (Qi et al, 2019), Cognitive Graph (Ding et al, 2019), MUPPET (Feldman and El-Yaniv, 2019), QFE (Nishida et al, 2019), and the official baseline . Unpublished work includes Graph-based Recurrent Retriever (Anonymous, 2020b), MIR+EPS+BERT, Transformer-XH (Anonymous, 2020c), PR-BERT, and TPReasoner (Xiong et al, 2019).…”
Section: Methods
confidence: 99%
“…The original task requires finding evidence passages from the abstract paragraphs of all Wikipedia pages to support a multi-hop question. For each question q, we collect 50 relevant passages based on bigram BM25 (Godbole et al., 2019). Two positive evidence passages for each question are provided by human annotators as the ground truth.…”
Section: Settings
confidence: 99%
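The bigram BM25 ranking mentioned in the statement above can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: the tiny passage set and the k1/b values are illustrative assumptions, and the only twist over standard BM25 is that terms are word bigrams rather than unigrams.

```python
# Minimal bigram BM25: score passages against a query using BM25 over
# word-bigram terms instead of single words.
import math
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return [f"{a} {b}" for a, b in zip(toks, toks[1:])]

def bm25_rank(query, passages, k1=1.5, b=0.75):
    docs = [Counter(bigrams(p)) for p in passages]
    avgdl = sum(sum(d.values()) for d in docs) / len(docs)
    n = len(docs)
    scores = []
    for d in docs:
        dl = sum(d.values())  # document length in bigrams
        s = 0.0
        for g in bigrams(query):
            df = sum(1 for doc in docs if g in doc)
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = d[g]
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
        scores.append(s)
    # Passage indices sorted by descending BM25 score.
    return sorted(range(n), key=lambda i: -scores[i])

passages = [
    "the eiffel tower is in paris",
    "the statue of liberty is in new york",
]
print(bm25_rank("eiffel tower location", passages))
# → [0, 1]
```

Bigram terms reward passages that preserve local word order ("eiffel tower" as a unit), which helps when many passages share the same individual words.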
“…Lin et al. [18] constructed a schema graph between QA-concept pairs for commonsense reasoning. In order to retrieve reasoning paths over Wikipedia, Godbole et al. [13] used entity linking for multi-hop retrieval. Asai et al. [1] utilized Wikipedia hyperlinks to construct a Wikipedia graph that helps identify the reasoning path.…”
Section: Knowledge in Retrieval-based QA Models
confidence: 99%
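The hyperlink-graph retrieval described in the statement above amounts to searching for a path of linked pages from an initial hit to the page holding the answer evidence. A minimal sketch, assuming a toy hyperlink graph (the page names and links below are invented for illustration, not taken from any of the cited systems):

```python
# Breadth-first search for a reasoning path over a Wikipedia-style
# hyperlink graph: from a first-hop page to the evidence page.
from collections import deque

HYPERLINKS = {
    "HotpotQA": ["Question answering"],
    "Question answering": ["Information retrieval", "Wikipedia"],
    "Information retrieval": ["BM25"],
    "Wikipedia": [],
    "BM25": [],
}

def reasoning_path(start, goal, links):
    # BFS returns the shortest hyperlink path, or None if unreachable.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(reasoning_path("HotpotQA", "BM25", HYPERLINKS))
# → ['HotpotQA', 'Question answering', 'Information retrieval', 'BM25']
```

In the real systems, each expansion step is scored by a learned reader or retriever rather than explored exhaustively, but the path structure is the same.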