Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.599
Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension

Abstract: Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer). Despite the effectiveness of existing methods on this benchmark, they treat these two sub-tasks individually during training while ignoring their dependencies. To address this issue, we present a novel multi-grained machine reading comprehension framework that focuses on modeling documents at th…
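The abstract's central claim is that the long-answer and short-answer sub-tasks should be trained jointly rather than individually. A minimal sketch of that idea follows, assuming a shared encoder that produces paragraph-level and token-level representations; the module and parameter names here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGrainedHead(nn.Module):
    """Illustrative joint head over a shared document encoder.

    A sketch of the general idea (joint training of long- and short-answer
    prediction), NOT the paper's actual architecture; the layer names and
    shapes here are assumptions.
    """
    def __init__(self, hidden_size: int):
        super().__init__()
        self.long_scorer = nn.Linear(hidden_size, 1)   # one score per paragraph candidate
        self.span_scorer = nn.Linear(hidden_size, 2)   # start/end logits per token

    def forward(self, paragraph_reprs: torch.Tensor, token_reprs: torch.Tensor):
        # paragraph_reprs: (num_paragraphs, hidden); token_reprs: (seq_len, hidden)
        long_logits = self.long_scorer(paragraph_reprs).squeeze(-1)
        start_logits, end_logits = self.span_scorer(token_reprs).unbind(-1)
        return long_logits, start_logits, end_logits

def joint_loss(long_logits, start_logits, end_logits, long_gold, start_gold, end_gold):
    """Sum the sub-task losses so one backward pass couples both granularities."""
    ce = F.cross_entropy
    return (ce(long_logits.unsqueeze(0), long_gold)
            + ce(start_logits.unsqueeze(0), start_gold)
            + ce(end_logits.unsqueeze(0), end_gold))

head = MultiGrainedHead(hidden_size=768)
long_logits, start_logits, end_logits = head(torch.randn(5, 768), torch.randn(128, 768))
loss = joint_loss(long_logits, start_logits, end_logits,
                  torch.tensor([2]), torch.tensor([17]), torch.tensor([23]))
loss.backward()  # gradients flow into both sub-task heads together
```

Summing the losses is the simplest way to couple the two granularities; per its title, the paper itself goes further and models the document at multiple levels of granularity with graph attention networks.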


Cited by 46 publications (28 citation statements). References 33 publications.
“…However, such linking information for creating triples is not necessarily prominent in documents other than Wikipedia. Some works segment the document content based on its semantic structure and rank the segments by their relevance to the query (Yan et al., 2019; Lee et al., 2018; Wang et al., 2018; Zheng et al., 2020; …).…”
Section: Sequence View (mentioning)
confidence: 99%
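For concreteness, here is a toy version of the segment-and-rank idea the statement describes, using plain lexical overlap as the relevance score. The cited works use far stronger learned relevance models; this is an illustrative stand-in only.

```python
def rank_segments(query: str, segments: list[str]) -> list[str]:
    """Rank document segments by a crude lexical-overlap relevance score.

    A toy stand-in for the segment-and-rank approaches cited above;
    real systems learn the relevance function rather than counting overlap.
    """
    q = set(query.lower().split())
    def score(seg: str) -> float:
        s = set(seg.lower().split())
        return len(q & s) / (len(q) or 1)
    return sorted(segments, key=score, reverse=True)

paragraphs = ["Graph attention networks assign weights to neighbours.",
              "The weather today is sunny."]
print(rank_segments("graph attention", paragraphs))  # relevant paragraph first
```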
“…Architecturally, the latter is similar to us, but we propose more technical novelty in terms of both improved attention and data augmentation. We note there is very recent academic work (Zheng et al., 2020), which we omit as GAAMA outperforms it on short answers and, more importantly, we compare against large-scale industry SOTA for the scope of this paper. Since their work is more academic, their model can trade additional computational cost for accuracy relative to GAAMA, as it involves computing graph attentions, which are typically more difficult to run in parallel when doing whole-graph propagation (Veličković et al., 2018).…”
Section: Competitors (mentioning)
confidence: 99%
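The statement above attributes the accuracy/cost trade-off to graph attention. Below is a dense NumPy sketch of a single-head graph attention layer in the style of Veličković et al. (2018); it is an illustration under simplifying assumptions (self-loops present, no multi-head concatenation), not the model the statement discusses.

```python
import numpy as np

def gat_layer(h, adj, W, a, negative_slope=0.2):
    """Single-head graph attention in the style of Veličković et al. (2018).

    Dense NumPy sketch for illustration only; assumes every node has at
    least one neighbour (add self-loops first).

    h:   (N, F)   input node features
    adj: (N, N)   adjacency matrix, nonzero where an edge exists
    W:   (F, Fp)  shared linear transform
    a:   (2*Fp,)  attention parameter vector
    """
    Wh = h @ W                                    # (N, Fp) transformed features
    Fp = Wh.shape[1]
    # e[i, j] = LeakyReLU(a . [Wh_i || Wh_j]); the concatenation splits into
    # two dot products, one per endpoint, scored for every node pair.
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, negative_slope * e)    # LeakyReLU
    e = np.where(adj != 0, e, -np.inf)            # keep existing edges only
    e = e - e.max(axis=1, keepdims=True)          # numerically stable softmax
    attn = np.exp(e)
    attn = attn / attn.sum(axis=1, keepdims=True) # normalise over neighbours
    return attn @ Wh                              # (N, Fp) aggregated features

rng = np.random.default_rng(0)
adj = np.eye(4) + np.diag(np.ones(3), 1)          # tiny chain graph with self-loops
out = gat_layer(rng.normal(size=(4, 8)), adj,
                rng.normal(size=(8, 8)), rng.normal(size=16))
```

The per-edge attention scores are what the quoted statement points to: propagating them over a whole document graph costs more than a single sequential pass over tokens.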
“…Machine Reading Comprehension. Machine reading comprehension (MRC) (Rajpurkar et al., 2016), which requires a model to extract an answer span for a question from reference documents, has received increasing attention recently (Yu et al., 2018; Devlin et al., 2019; Zheng et al., 2020; Yuan et al., 2020). Owing to the rise of pre-trained models (Devlin et al., 2018), machines are able to achieve highly competitive results on classic datasets (e.g.…”
Section: Related Work (mentioning)
confidence: 99%
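As a concrete reference point for the span-extraction formulation mentioned above, here is a generic decoding step that picks the highest-scoring (start, end) pair. This is standard extractive-MRC decoding, not the procedure of any particular system cited here; the `max_span_len` cap is an illustrative assumption.

```python
import numpy as np

def best_span(start_logits, end_logits, max_span_len=30):
    """Return the (start, end) pair maximising start + end score, start <= end."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_span_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

print(best_span(np.array([0.1, 2.0, 0.3]), np.array([0.2, 0.1, 1.5])))  # -> (1, 2)
```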