Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-2092

Simple and Effective Semi-Supervised Question Answering

Abstract: The recent success of deep learning models for extractive Question Answering (QA) hinges on the availability of large annotated corpora. However, large domain-specific annotated corpora are limited and expensive to construct. In this work, we envision a system where the end user specifies a set of base documents and only a few labelled examples. Our system exploits the document structure to create cloze-style questions from these base documents; pre-trains a powerful neural network on the cloze-style…
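
As a concrete illustration of the pipeline the abstract outlines, below is a minimal Python sketch of the first step, cloze-style question generation: candidate answer spans are blanked out of a sentence to yield (question, answer) pairs. The capitalized-span heuristic and the @placeholder token are illustrative assumptions, not the paper's actual extraction procedure.

```python
import re

def make_cloze_questions(sentence):
    """Blank out candidate answer spans to form (cloze question, answer) pairs.
    The paper derives answer spans from document structure; a capitalized-span
    heuristic stands in for that here (an assumption for illustration)."""
    pairs = []
    for m in re.finditer(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", sentence):
        question = sentence[:m.start()] + "@placeholder" + sentence[m.end():]
        pairs.append((question, m.group(0)))
    return pairs

# One cloze question per candidate span in the source sentence.
for q, a in make_cloze_questions("Alan Turing proposed the imitation game in 1950."):
    print(q, "->", a)
```

Each resulting pair can then serve as synthetic pre-training data for the QA model before it is fine-tuned on the few labelled examples.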

Citation summary: cited by 66 publications (75 citation statements); references 23 publications; citing statements published 2018–2023; classifications: 1 supporting, 70 mentioning, 0 contrasting.
“…Table 1 shows the results in context of published baselines and supervised results. Our approach significantly outperforms baseline systems and Dhingra et al. (2018), and surpasses early supervised methods.…”
Section: EM F1 (mentioning)
Confidence: 86%
“…As we cannot assume access to a development dataset when training unsupervised models, QA model training is halted when QA performance on a held-out set of synthetic QA data plateaus. We do, however, use the SQuAD development set to assess which model components are… [flattened table fragment, apparently EM / F1: (Dhingra et al., 2018) 3.2† / 6.8†; BiDAF+SA (Dhingra et al., 2018)‡ 10.0* / 15.0*; BERT-Large (Dhingra et al., 2018)‡ 28.4* / 35.8*]”
Section: Unsupervised QA Experiments (mentioning)
Confidence: 99%
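
The halting criterion quoted above, stopping when performance on held-out synthetic QA data plateaus, amounts to patience-based early stopping. A minimal sketch, assuming hypothetical train_one_epoch and evaluate_f1 hooks supplied by the caller (neither is from the cited work):

```python
def train_until_plateau(train_one_epoch, evaluate_f1, patience=2, min_delta=0.1):
    """Train until F1 on held-out synthetic QA data stops improving by at
    least min_delta for `patience` consecutive epochs (no real dev set)."""
    best_f1, stale = 0.0, 0
    while stale < patience:
        train_one_epoch()
        f1 = evaluate_f1()
        if f1 > best_f1 + min_delta:
            best_f1, stale = f1, 0
        else:
            stale += 1
    return best_f1

# Dummy hooks simulating improvement followed by a plateau.
scores = iter([40.0, 55.0, 58.0, 58.05, 58.02])
print(train_until_plateau(lambda: None, lambda: next(scores)))  # 58.0
```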
“…There are limited studies on semi-supervised learning for RC tasks (Dhingra et al., 2018). In this section, we explore this possibility with virtual adversarial training.…”
Section: Is Semi-Supervised Learning Helpful? (mentioning)
Confidence: 99%
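
Virtual adversarial training, as referenced in the excerpt, perturbs inputs in the direction that most changes the model's predictions and penalizes the resulting divergence; since it needs no labels, it suits semi-supervised RC. A minimal PyTorch sketch in the style of Miyato et al. (2017); epsilon, xi, and the single power-iteration step are illustrative defaults, not the cited paper's configuration:

```python
import torch
import torch.nn.functional as F

def vat_loss(model, embeds, epsilon=1.0, xi=1e-6):
    """Virtual adversarial loss: estimate the perturbation of the input
    embeddings that most changes the output distribution (one power-
    iteration step), then penalize the model's sensitivity to it."""
    with torch.no_grad():
        p = F.softmax(model(embeds), dim=-1)  # clean predictive distribution
    # Small random direction for the finite-difference gradient estimate.
    d = xi * F.normalize(torch.randn_like(embeds).flatten(1), dim=1).view_as(embeds)
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(embeds + d), dim=-1), p, reduction="batchmean")
    grad, = torch.autograd.grad(kl, d)
    r_adv = epsilon * F.normalize(grad.flatten(1), dim=1).view_as(embeds)
    log_p_adv = F.log_softmax(model(embeds + r_adv.detach()), dim=-1)
    return F.kl_div(log_p_adv, p, reduction="batchmean")

# Toy usage: a linear classifier over 8-dimensional "embeddings".
toy = torch.nn.Linear(8, 3)
print(vat_loss(toy, torch.randn(4, 8)).item())
```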
“…Another common technique is data augmentation (DA) (Zhang and Bansal, 2019), artificially generating more questions to enhance the training data, or a multi-task learning (MTL) setup (Yatskar, 2018; Dhingra et al., 2018; …). Alberti et al. (2019a; 2019b) combine models of question generation with answer extraction and filter the results to ensure round-trip consistency, obtaining the SOTA on NQ.…”
Section: Related Work (mentioning)
Confidence: 99%
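
The round-trip consistency filter attributed to Alberti et al. can be expressed compactly: generate a question for each candidate answer, re-answer it with a QA model, and keep the pair only when the original answer is recovered. A sketch with hypothetical generate_question and answer_question hooks (both stand-ins, not the authors' models):

```python
def roundtrip_filter(examples, generate_question, answer_question):
    """Keep only synthetic (context, question, answer) triples where the
    QA model recovers the original answer (round-trip consistency)."""
    kept = []
    for context, gold_answer in examples:
        question = generate_question(context, gold_answer)
        if answer_question(context, question) == gold_answer:
            kept.append((context, question, gold_answer))
    return kept

# Toy hooks: a canned generator and an "oracle" QA model.
data = [("Paris is the capital of France.", "Paris")]
gen = lambda ctx, ans: "What is the capital of France?"
qa = lambda ctx, q: "Paris"
print(roundtrip_filter(data, gen, qa))  # the pair survives the filter
```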