Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018
DOI: 10.18653/v1/P18-1078
Simple and Effective Multi-Paragraph Reading Comprehension

Abstract: We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well-calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training model…
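The shared-normalization objective described in the abstract amounts to running the answer-span softmax over candidates pooled from all paragraphs sampled from a document, rather than within each paragraph separately. A minimal PyTorch sketch of that idea follows; the function name, tensor layout, and the marginalization over multiple gold-matching spans are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def shared_norm_loss(paragraph_span_scores, paragraph_gold_flags):
    """Shared-normalization loss: softmax over candidate answer spans
    pooled across ALL paragraphs sampled from the same document,
    instead of normalizing within each paragraph independently.

    paragraph_span_scores: list of 1-D tensors, one per paragraph,
        each holding unnormalized scores for that paragraph's spans.
    paragraph_gold_flags: list of 1-D bool tensors marking spans that
        match the gold answer (assumes at least one match in the doc).
    """
    all_scores = torch.cat(paragraph_span_scores)   # pool all spans
    all_gold = torch.cat(paragraph_gold_flags)
    log_probs = F.log_softmax(all_scores, dim=0)    # one global softmax
    # Marginalize over every gold-matching span in the document.
    return -torch.logsumexp(log_probs[all_gold], dim=0)

# Toy usage: two sampled paragraphs, gold span lives in the second one.
scores = [torch.randn(5, requires_grad=True),
          torch.randn(7, requires_grad=True)]
gold = [torch.zeros(5, dtype=torch.bool),
        torch.tensor([0, 0, 1, 0, 0, 0, 0], dtype=torch.bool)]
loss = shared_norm_loss(scores, gold)
loss.backward()
```

Because the softmax is global, a paragraph that does not contain the answer still pushes its spans' probabilities down relative to spans elsewhere in the document, which is what makes confidence scores comparable across paragraphs.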

Cited by 345 publications (371 citation statements; citation types: 3 supporting, 359 mentioning, 0 contrasting). References 30 publications.
“…Previous work has dealt with this setting by detecting spans in the document through text matching (Joshi et al., 2017; Clark and Gardner, 2018). Following previous approaches, we define a solution z as a span in the document.…”
Section: Multi-mention Reading Comprehension (mentioning)
confidence: 99%
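The text-matching step this excerpt refers to is typically exact string matching of the answer against the tokenized document: a form of distant supervision, since only the answer string, not its position, is given. A hedged sketch, with whitespace tokenization as a simplifying assumption rather than the cited papers' exact procedure:

```python
def find_answer_spans(doc_tokens, answer_tokens):
    """Return all (start, end) token spans in the document whose tokens
    exactly match the answer -- the distant-supervision labeling used
    when a dataset provides answer strings but not their locations."""
    n, m = len(doc_tokens), len(answer_tokens)
    spans = []
    for i in range(n - m + 1):
        if doc_tokens[i:i + m] == answer_tokens:
            spans.append((i, i + m - 1))  # inclusive end index
    return spans

# Example: the answer "Paris" occurs twice, yielding two gold spans.
doc = "Paris is the capital of France . Paris hosts the Louvre .".split()
print(find_answer_spans(doc, ["Paris"]))  # [(0, 0), (7, 7)]
```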
“…by selecting the first answer span in TriviaQA (Joshi et al., 2017; Tay et al., 2018; Talmor and Berant, 2019). Some models are trained with maximum marginal likelihood (MML) (Kadlec et al., 2016; Swayamdipta et al., 2018; Clark and Gardner, 2018), but it is unclear if it gives a meaningful improvement over the heuristics.…”
Section: Introduction (mentioning)
confidence: 99%
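For context, the first-span heuristic and the maximum marginal likelihood (MML) objective contrasted in this excerpt can be written as follows; the notation ($\mathcal{Z}$ for the set of gold-matching spans) is ours, not the excerpt's:

```latex
% First-match heuristic: train on one designated span z_1 only.
\mathcal{L}_{\text{first}} = -\log p_\theta(z_1 \mid x)
% MML: marginalize over every span whose text matches the gold answer.
\mathcal{L}_{\text{MML}} = -\log \sum_{z \in \mathcal{Z}} p_\theta(z \mid x)
```

The first-span heuristic commits to one occurrence that may be spurious; MML spreads probability over all matching occurrences, though, as the excerpt notes, its practical benefit over the heuristics is unclear.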
“…Machine Reading at Scale: first proposed and formalized in Chen et al. (2017), MRS has gained popularity with an increasing amount of work on both dataset collection (Joshi et al., 2017) and MRS model developments (Wang et al., 2018; Clark and Gardner, 2017; Htut et al., 2018). In some previous work, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works (Yang et al., 2018), sentence-level retrieval modules were merely for solving the auxiliary sentence selection task.…”
Section: Related Work (mentioning)
confidence: 99%
“…Work in that direction includes Watanabe et al. (2017), who present a neural information retrieval system to retrieve a sufficiently small paragraph, and Geva and Berant (2018), who employ a Deep Q-Network (DQN) to solve the task by learning to navigate over an intra-document tree. A similar approach is chosen by Clark and Gardner (2017). However, instead of operating on document structure, they adopt a sampling technique to make the model more robust towards multi-paragraph documents.…”
Section: Related Work (mentioning)
confidence: 99%