Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, 2018
DOI: 10.18653/v1/n18-4012

Read and Comprehend by Gated-Attention Reader with More Belief

Abstract: Gated-Attention (GA) Reader has been effective for reading comprehension. GA Reader makes two assumptions: (1) a uni-directional attention that uses an input query to gate token encodings of a document; (2) encoding at the cloze position of an input query is considered for answer prediction. In this paper, we propose Collaborative Gating (CG) and Self-Belief Aggregation (SBA) to address the above assumptions respectively. In CG, we first use an input document to gate token encodings of an input query so that t…
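To make the gating step the abstract refers to concrete, below is a minimal NumPy sketch of one gated-attention hop as used in the GA Reader: each document token encoding attends over the query tokens and is then gated by element-wise multiplication with its query-aware summary. The function name, shapes, and random encodings are illustrative assumptions, not the authors' implementation; Collaborative Gating, as proposed in the paper, would additionally apply the same operation in the reverse direction, using the document to gate the query token encodings.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention(doc, query):
    """One gated-attention hop (illustrative sketch, not the paper's code).

    doc:   (T_d, h) document token encodings
    query: (T_q, h) query token encodings
    returns: (T_d, h) document encodings gated by query-aware summaries
    """
    scores = doc @ query.T            # (T_d, T_q) token-pair similarities
    alpha = softmax(scores, axis=-1)  # attention over query tokens, per doc token
    q_tilde = alpha @ query           # (T_d, h) query summary for each doc token
    return doc * q_tilde              # element-wise gating of document encodings

# Usage with random encodings (shapes only; no trained model is assumed)
doc = np.random.randn(50, 128)
query = np.random.randn(10, 128)
print(gated_attention(doc, query).shape)  # (50, 128)
```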

Citations: Cited by 1 publication (1 citation statement)
References: 16 publications (19 reference statements)

“…Considerable volume of research work has looked into various Question Answering (QA) settings, ranging from retrieval-based QA (Voorhees, 2001) to recent neural approaches that reason over Knowledge Bases (KB) (Bordes et al., 2014), or raw text (Shen et al., 2017; Deng and Tam, 2018; Min et al., 2018). In this paper we use the NarrativeQA corpus (Kocisky et al., 2018) as a starting point and focus on the task of answering questions from the full text of books, which we call BookQA.…”
Section: Introduction
Mentioning confidence: 99%