Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1253
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering

Abstract: Text-based Question Generation (QG) aims at generating natural and relevant questions that can be answered by a given answer in some context. Existing QG models suffer from a "semantic drift" problem, i.e., the semantics of the model-generated question drifts away from the given context and answer. In this paper, we first propose two semantics-enhanced rewards obtained from downstream question paraphrasing and question answering tasks to regularize the QG model to generate semantically valid questions. Second, …
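The reward-based regularization the abstract describes is typically trained with a self-critical policy-gradient update mixed with cross-entropy. The sketch below is a hypothetical, framework-free illustration of that training signal, not the paper's actual code; the function names and the mixing weight are assumptions.

```python
# Minimal sketch of self-critical reward regularization for QG.
# sampled_logprobs: per-token log-probabilities of a sampled question;
# rewards come from external scorers (e.g. a QPP paraphrase classifier
# or a QAP question-answering model), stubbed out here.

def self_critical_loss(sampled_logprobs, sampled_reward, greedy_reward):
    """REINFORCE with a greedy-decoding baseline:
    loss = -(r(sampled) - r(greedy)) * sum(log p(sampled tokens))."""
    advantage = sampled_reward - greedy_reward
    return -advantage * sum(sampled_logprobs)

def mixed_objective(ce_loss, rl_loss, mixing=0.99):
    # Common mixed-training form: gamma * RL term + (1 - gamma) * cross-entropy.
    return mixing * rl_loss + (1 - mixing) * ce_loss
```

A positive advantage (the sampled question scored higher than the greedy one) pushes the model toward the sampled tokens; the cross-entropy term keeps generations fluent.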

Cited by 113 publications (151 citation statements). References 58 publications.
“…• NQG-Knowledge [16], DLPH [12]: auxiliary-information-enhanced question generation models with extra inputs such as knowledge or difficulty. • Self-training-EE [38], BERT-QG-QAP [51], NQG-LM [55], CGC-QG [27] and QType-Predict [56]: multi-task question generation models with auxiliary tasks such as question answering, language modeling, question type prediction and so on.…”
Section: Evaluating ACS-aware Question Generation
confidence: 99%
“…[55] incorporates a language modeling task to help question generation. [51] utilizes question paraphrasing and question answering tasks to regularize the QG model to generate semantically valid questions.…”
Section: Related Work
confidence: 99%
“…Rewards. We use ROUGE-L, QPP, and QAP (Zhang and Bansal, 2019) as rewards for this task. QPP is calculated as the probability of the generated question being the paraphrase of the ground-truth question via a classifier trained on Quora Question Pairs.…”
Section: Question Generation
confidence: 99%
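The QPP reward quoted above plugs into training as a scorer with the interface prob(q1, q2) → [0, 1]. The sketch below shows that interface; the Jaccard-overlap stub is only a toy stand-in for the actual paraphrase classifier trained on Quora Question Pairs.

```python
# Hypothetical sketch of the QPP reward interface. The real scorer is a
# trained paraphrase classifier; toy_paraphrase_prob is a toy substitute.

def toy_paraphrase_prob(q1, q2):
    # Jaccard overlap of lowercased token sets, as a stand-in probability.
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def qpp_reward(generated_q, gold_q, prob=toy_paraphrase_prob):
    # QPP: probability that the generated question paraphrases the gold one.
    return prob(generated_q, gold_q)
```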
“…For pre-processing, we do standard tokenization. We report on evaluation metrics including BLEU-4, METEOR, ROUGE-L, Q-BLEU1 (Nema and Khapra, 2018), as well as QPP and QAP (Zhang and Bansal, 2019).…”
Section: Question Generation
confidence: 99%
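Since ROUGE-L appears here both as a reward and as an evaluation metric, a minimal version of its standard LCS-based F-measure formulation is sketched below (beta = 1.2 is the conventional default weighting recall over precision).

```python
# Minimal ROUGE-L: F-measure over the longest common subsequence (LCS)
# of candidate and reference token sequences.

def lcs_len(a, b):
    # Classic dynamic-programming LCS length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return (1 + beta**2) * prec * rec / (rec + beta**2 * prec)
```

Whitespace tokenization is a simplification; reported scores in the cited papers use the standard tokenized setup.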
“…Synthetic data generation has also been used for question answering (QA) tasks [2,19,43,47]. Much effort has focused on the machine reading comprehension (MRC) variant of QA, where questions should be answered in the context of a prose paragraph.…”
Section: Introduction
confidence: 99%