Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019)
DOI: 10.18653/v1/n19-1270

Improving Machine Reading Comprehension with General Reading Strategies

Kai Sun et al.

Abstract (excerpt): “…our fine-tuned models that incorporate these strategies. Core code is available at https://github.com/nlpdata/strategy/.” (Author note: this work was done when K. S. was an intern at the Tencent AI Lab, Bellevue, WA.)


Cited by 107 publications (84 citation statements); references 42 publications.

Citation statements, ordered by relevance:
“…We demonstrate its effectiveness on two challenging multi-sentence reasoning datasets: MultiRC (Khashabi et al., 2018) and OpenBookQA (Mihaylov et al., 2018). Multee using ELMo contextual embeddings (Peters et al., 2018) matches state-of-the-art results achieved with large transformer-based models (Radford et al., 2018) that were trained on a sequence of large-scale tasks (Sun et al., 2019).…”
Section: Our Implementation Of (mentioning)
confidence: 64%
“…While ELMo contextual embeddings helped in MultiRC, they did not help OpenBookQA. We believe this is in part due to the mismatch with our ELMo training setup, in which all sentences are treated as a single sequence; while this holds in MultiRC, it is not the case in OpenBookQA.…” [Footnote: published on arXiv on Oct 31, 2018 (Sun et al., 2019).]
Section: Results (mentioning)
confidence: 99%
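The setup mismatch this citing paper describes (all sentences concatenated into one sequence versus each sentence encoded on its own) is easy to see in miniature. The following minimal Python sketch is purely illustrative: the encode() stand-in and the example passage are hypothetical, not the citing paper's actual ELMo pipeline. It only shows how much context each token's representation can draw on under the two input layouts.

```python
# Hypothetical sketch: contrast the two input layouts described above.
# encode() is a stand-in for a contextual encoder such as ELMo; it records,
# for each token, how many tokens were visible in the same input sequence.

from typing import List, Tuple

def encode(tokens: List[str]) -> List[Tuple[str, int]]:
    """Mock contextual encoder: pairs each token with its context size."""
    return [(tok, len(tokens)) for tok in tokens]

passage = [
    ["The", "membrane", "is", "selectively", "permeable", "."],
    ["It", "controls", "what", "enters", "the", "cell", "."],
]

# Layout A (the MultiRC-style setup): all sentences are concatenated into a
# single sequence, so every token is contextualized against the whole passage.
single_sequence = [tok for sent in passage for tok in sent]
print(encode(single_sequence)[0])   # ('The', 13): passage-wide context

# Layout B (closer to the OpenBookQA case): sentences are independent, so no
# token's representation can draw on a neighboring sentence.
per_sentence = [encode(sent) for sent in passage]
print(per_sentence[0][0])           # ('The', 6): sentence-local context
```

Under layout A, a pronoun like "It" in the second sentence can be contextualized against the first sentence; under layout B it cannot, which is the kind of train/test mismatch the quoted passage offers as a partial explanation for the OpenBookQA result.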
“…It is worth noting that recent large-scale language models (LMs) (Devlin et al., 2019; Radford et al., 2018) have now been applied to this task, leading to improved state-of-the-art results (Sun et al., 2018; Banerjee et al., 2019; Pan et al., 2019). However, our knowledge-gap guided approach to QA is orthogonal to the underlying model.…”
Section: OpenBookQA Results (mentioning)
confidence: 99%
“…Multi-hop RC: Several datasets in the literature require reasoning in multiple steps, for example bAbI (Weston et al., 2015), MultiRC (Khashabi et al., 2018) and OpenBookQA (Mihaylov et al., 2018). Many systems have been proposed to solve the multi-hop RC problem on these datasets (Sun et al., 2018; Wu et al., 2019). However, these datasets require multi-hop reasoning over multiple sentences or multiple pieces of common knowledge, whereas the problem we address in this paper requires collecting evidence across multiple documents.…”
Section: Related Work (mentioning)
confidence: 99%