2018
DOI: 10.1609/aaai.v32i1.12053

R<sup>3</sup>: Reinforced Ranker-Reader for Open-Domain Question Answering

Abstract: In recent years researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state-of-the-art results in simplified closed-domain settings such as the SQuAD (Rajpurkar et al. 2016) dataset, which provides a pre-selected passage from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., Wikipedia)…

Cited by 120 publications (43 citation statements). References 20 publications.
“…The advanced reading comprehension models (Chen et al, 2017a; Banerjee et al, 2019) split this complex task into two steps: a retriever selects the documents most relevant to a question from a corpus, and a reader produces an answer from the retrieved documents. Some previous work (Kratzwald and Feuerriegel, 2018; Lee et al, 2018; Das et al, 2019; Wang et al, 2018) trains end-to-end models to rerank within a closed set. Although these models are better at retrieval, they can hardly scale to larger corpora.…”
Section: Related Work
confidence: 99%
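The two-stage pipeline described in this citation statement can be sketched with deliberately toy components — a TF-IDF-weighted term-overlap retriever and a "reader" that simply picks the best-overlapping sentence. Both are hypothetical stand-ins for illustration only, not the trained models of the cited work:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and strip punctuation; a minimal stand-in tokenizer."""
    return re.findall(r"[a-z]+", text.lower())

def retrieve(question, corpus, k=1):
    """Rank documents by TF-IDF-weighted overlap with the question terms."""
    q_terms = tokenize(question)
    n = len(corpus)
    # document frequency of each term across the corpus
    df = Counter(t for doc in corpus for t in set(tokenize(doc)))

    def score(doc):
        tf = Counter(tokenize(doc))
        return sum(tf[t] * math.log((n + 1) / (df[t] + 1)) for t in q_terms)

    return sorted(corpus, key=score, reverse=True)[:k]

def read(question, passages):
    """Toy 'reader': return the sentence sharing the most terms with the question."""
    q_terms = set(tokenize(question))
    sents = [s.strip() for p in passages for s in p.split(".") if s.strip()]
    return max(sents, key=lambda s: len(q_terms & set(tokenize(s))))

corpus = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany.",
]
question = "What is the capital of France?"
top = retrieve(question, corpus, k=1)   # retriever: narrow corpus to k passages
answer = read(question, top)            # reader: answer from the retrieved passages
```

The scaling concern the statement raises is visible even here: the retriever touches every document per query, which is why reranking-style end-to-end models struggle on large corpora.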
“…Models for open-domain QA often follow a two-stage process: (1) a retriever selects a small collection of documents relevant to the question from a large corpus (e.g., Wikipedia); (2) a reader extracts or generates an answer from the selected documents. While classical approaches rely on counting-based bag-of-words representations like TF-IDF or BM25 (Chen et al, 2017; Wang et al, 2018; Yang et al, 2019), more recent deep learning approaches learn dense representations of the questions and documents through a dual-encoder framework (Karpukhin et al, 2020). In such learned retriever setups, document retrieval is done efficiently using Maximum Inner Product Search (MIPS; Shrivastava and Li, 2014).…”
Section: Open-Domain Question Answering
confidence: 99%
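In the dual-encoder setup this statement describes, retrieval reduces to a maximum inner product search over precomputed document vectors. A minimal sketch, with hard-coded vectors standing in for the outputs of trained question and document encoders (the embeddings and dimensions here are invented for illustration):

```python
def inner(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def mips(query_vec, doc_vecs, k=2):
    """Exact MIPS by brute force; production systems use approximate
    indexes (e.g. FAISS) over millions of passage vectors."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: inner(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Stand-in document embeddings from a hypothetical trained document encoder.
doc_vecs = [
    [0.9, 0.1, 0.0],  # doc 0
    [0.1, 0.8, 0.2],  # doc 1
    [0.0, 0.2, 0.9],  # doc 2
]
# Stand-in question embedding; it points in nearly the same direction as doc 0.
q = [0.85, 0.15, 0.05]
top_ids = mips(q, doc_vecs, k=2)
```

The key property is that documents are encoded once, offline; each query then costs only one encoder pass plus an inner-product search.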
“…1) Candidate set representation: To represent the entity span candidate set C, we propose to encode the information of both entity type y and sequence X into p C . Specifically, we build entity-aware sequence representation with Match-LSTM (Wang et al, 2018b), by matching the entity marker up with the sequence:…”
Section: Discriminator
confidence: 99%
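The Match-LSTM matching step referenced in this statement pairs each sequence position with an attention-weighted summary of the query before an LSTM consumes the pairs. A toy sketch of just the attention-and-concatenate step, using tiny hand-picked vectors (no LSTM, no learned parameters — both are assumptions stripped out for brevity):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def match(seq, query):
    """For each sequence position h, attend over the query vectors and
    concatenate h with its attended query summary — the core matching
    step, minus the LSTM that would consume these pairs."""
    out = []
    for h in seq:
        # attention weights: similarity of h to each query vector
        weights = softmax([inner for inner in
                           (sum(a * b for a, b in zip(h, q)) for q in query)])
        # attended query summary: weighted sum of query vectors
        attended = [sum(w * q[d] for w, q in zip(weights, query))
                    for d in range(len(h))]
        out.append(h + attended)  # concatenation doubles the dimension
    return out

seq = [[1.0, 0.0], [0.0, 1.0]]      # two token vectors
query = [[1.0, 0.0], [0.0, 1.0]]    # two query-side vectors
matched = match(seq, query)
```

Each output vector carries both the original token representation and what the query "says about" that position, which is why the cited work uses it to build entity-aware sequence representations.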