Proceedings of the 55th Annual Meeting of the Association For Computational Linguistics (Volume 1: Long Papers) 2017
DOI: 10.18653/v1/p17-1018
Gated Self-Matching Networks for Reading Comprehension and Question Answering

Abstract: In this paper, we present the gated self-matching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer …
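
As a rough illustration of the gated attention-based recurrent matching described in the abstract, the sketch below attends each passage word over the question and gates the concatenated input before a recurrent update. The class name, hidden size, and GRU-based update are illustrative assumptions, not the authors' exact formulation; in the full model such a layer would typically run bidirectionally over the passage.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttentionMatcher(nn.Module):
    """Sketch of a gated attention-based recurrent matching layer.

    Each passage word attends over the question; a gate filters the
    concatenated [word, attended question] vector before the recurrent
    update. Hidden size and the GRU cell are illustrative assumptions.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.w_q = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_p = nn.Linear(hidden_size, hidden_size, bias=False)
        self.score = nn.Linear(hidden_size, 1, bias=False)
        self.gate = nn.Linear(2 * hidden_size, 2 * hidden_size, bias=False)
        self.cell = nn.GRUCell(2 * hidden_size, hidden_size)

    def forward(self, passage: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # passage: (batch, P, H); question: (batch, Q, H)
        batch, p_len, hidden = passage.shape
        state = passage.new_zeros(batch, hidden)
        q_proj = self.w_q(question)                              # (batch, Q, H)
        outputs = []
        for t in range(p_len):
            u_t = passage[:, t]                                  # (batch, H)
            # Additive attention of the current passage word over the question.
            scores = self.score(torch.tanh(q_proj + self.w_p(u_t).unsqueeze(1)))
            alpha = F.softmax(scores.squeeze(-1), dim=-1)        # (batch, Q)
            attended = torch.bmm(alpha.unsqueeze(1), question).squeeze(1)
            # Gate the concatenated input, then update the recurrent state.
            x_t = torch.cat([u_t, attended], dim=-1)             # (batch, 2H)
            x_t = torch.sigmoid(self.gate(x_t)) * x_t
            state = self.cell(x_t, state)
            outputs.append(state)
        # Question-aware passage representation, one vector per passage word.
        return torch.stack(outputs, dim=1)                       # (batch, P, H)
```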

Cited by 604 publications (417 citation statements). References 19 publications.
“…In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail (Hermann et al, 2015), SQuAD (Rajpurkar et al, 2016), RACE (Lai et al, 2017), TriviaQA (Joshi et al, 2017) and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader (Kadlec et al, 2016), BiDAF (Seo et al, 2017), Interactive AoA Reader (Cui et al, 2017), Gated Attention Reader (Dhingra et al, 2017), R-Net (Wang et al, 2017a), DCN (Xiong et al, 2017), QANet (Yu et al, 2018), and achieve promising results in most existing public MRC datasets.…”
Section: Machine Reading Comprehension
confidence: 99%
“…In reading comprehension literature, self-attention has been investigated. (Wang et al, 2017b) proposed a Gated Self-Matching mechanism which produced context-enhanced token encodings in a document. In this paper, we have a different angle for applying self-attention.…”
Section: Related Work
confidence: 99%
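
For readers unfamiliar with the self-matching idea referenced in the statement above, the following is a minimal sketch of additive self-attention over the passage that yields context-enhanced token encodings. Layer names and projection sizes are assumptions for illustration, not the cited paper's exact architecture; the concatenated output would normally be fed to a further encoding layer before answer prediction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfMatchingAttention(nn.Module):
    """Sketch of self-matching attention over a passage.

    Every passage position attends over the whole passage, so each output
    token encoding carries whole-passage context. Projection sizes are
    illustrative assumptions.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj_a = nn.Linear(hidden_size, hidden_size, bias=False)
        self.proj_b = nn.Linear(hidden_size, hidden_size, bias=False)
        self.score = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, passage: torch.Tensor) -> torch.Tensor:
        # passage: (batch, P, H)
        a = self.proj_a(passage).unsqueeze(2)                 # (batch, P, 1, H)
        b = self.proj_b(passage).unsqueeze(1)                 # (batch, 1, P, H)
        scores = self.score(torch.tanh(a + b)).squeeze(-1)    # (batch, P, P)
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, passage)                 # (batch, P, H)
        # Concatenate each token encoding with its whole-passage context.
        return torch.cat([passage, context], dim=-1)          # (batch, P, 2H)
```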
“…Several methods have been proposed to apply deep neural networks for effective information retrieval [12,17,30] and question answering [42,47], also with focus on healthcare [25,52]. However, our CDSS scenario poses a unique combination of open challenges to a retrieval system:…”
Section: Introduction
confidence: 99%