Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
DOI: 10.18653/v1/d18-1427

Answer-focused and Position-aware Neural Question Generation

Abstract: In this paper, we focus on the problem of question generation (QG). Recent neural network-based approaches employ the sequence-to-sequence model, which takes an answer and its context as input and generates a relevant question as output. However, we observe two major issues with these approaches: (1) The generated interrogative words (or question words) do not match the answer type. (2) The model copies the context words that are far from and irrelevant to the answer, instead of the words that are close and relevant to the answer.
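To make the position-aware idea in the abstract concrete, here is a minimal sketch (our own illustration, not the authors' released code) of a relative-distance feature computed for each context token with respect to the answer span; the function name and the bucketing scheme are assumptions.

```python
# Illustrative sketch (not the authors' code): compute a relative-distance
# feature for each context token, measuring how far it is from the answer span.
# Such a feature can be embedded and concatenated to the word embeddings so the
# encoder and copy mechanism can prefer answer-adjacent words.

def relative_distance_features(context_tokens, answer_start, answer_end, max_bucket=10):
    """Return one distance bucket per token; 0 means inside the answer span."""
    features = []
    for i, _tok in enumerate(context_tokens):
        if answer_start <= i <= answer_end:
            dist = 0
        elif i < answer_start:
            dist = answer_start - i
        else:
            dist = i - answer_end
        features.append(min(dist, max_bucket))  # clip long distances into one bucket
    return features

if __name__ == "__main__":
    tokens = "the eiffel tower was completed in 1889 in paris".split()
    # Suppose the answer span is "1889" (index 6).
    print(relative_distance_features(tokens, answer_start=6, answer_end=6))
    # -> [6, 5, 4, 3, 2, 1, 0, 1, 2]
```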

Cited by 153 publications (152 citation statements); references 15 publications.

Citation statements:
“…The baseline model is an attention-based seq2seq pointer-generator reinforced by lexical features, like the work of Sun et al. (2018). In our proposed model, we employ multi-task learning with language modeling as an auxiliary task for QG.…”
Section: Model Description (mentioning)
confidence: 99%
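As a rough sketch of how such multi-task training is typically combined (the function name and the weighting scheme are our own assumptions, not taken from the cited papers):

```python
# Illustrative sketch (assumed names and weighting): the total training
# objective adds a weighted language-modeling loss to the QG loss.
def multitask_loss(qg_loss: float, lm_loss: float, lm_weight: float = 0.5) -> float:
    """In practice these would be tensors; scalars keep the sketch minimal."""
    return qg_loss + lm_weight * lm_loss

print(multitask_loss(2.3, 4.1))  # 2.3 + 0.5 * 4.1 = 4.35 (up to float rounding)
```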
“…In this work, we propose to incorporate language modeling as an auxiliary task to help QG via multi-task learning. We adopt the pointer-generator (See et al., 2017) reinforced with features as the baseline model, which yields state-of-the-art results (Sun et al., 2018). The language modeling task is to predict the next word and the previous word, taking plain text as input without relying on any annotation.…”
Section: Introduction (mentioning)
confidence: 99%
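The auxiliary task described here predicts both the next and the previous word from unlabeled text; a minimal sketch of how those targets could be built, with assumed names and padding convention, is:

```python
# Illustrative sketch (our own assumption of the setup): from an unlabeled
# token sequence, build targets for an auxiliary LM task that predicts the
# next word (forward) and the previous word (backward) at each position.

def lm_targets(token_ids, pad_id=0):
    """Return (forward_targets, backward_targets), padded at the sequence edges."""
    forward = token_ids[1:] + [pad_id]    # next word at each position
    backward = [pad_id] + token_ids[:-1]  # previous word at each position
    return forward, backward

if __name__ == "__main__":
    ids = [11, 42, 7, 99]
    print(lm_targets(ids))
    # -> ([42, 7, 99, 0], [0, 11, 42, 7])
```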
“…Several works have addressed this issue. Sun et al. (2018) incorporated a question word generation mode to generate a question word at each decoding step, which utilized the answer information by employing the encoder hidden states at the answer start position. However, their method did not consider the structure and lexical features of the answer.…”
Section: Introduction (mentioning)
confidence: 99%
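A hedged sketch of such a question-word prediction head, driven by the encoder hidden state at the answer start position (shapes, names, and the question-word inventory size are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

# Illustrative sketch (assumed shapes and names): a head that predicts a
# distribution over interrogative words ("what", "who", ...) from the encoder
# hidden state at the answer start position, as the statement above describes.
class QuestionWordHead(nn.Module):
    def __init__(self, hidden_size: int, num_question_words: int = 9):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_question_words)

    def forward(self, encoder_states: torch.Tensor, answer_start: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, seq_len, hidden); answer_start: (batch,)
        batch_idx = torch.arange(encoder_states.size(0))
        answer_state = encoder_states[batch_idx, answer_start]  # (batch, hidden)
        return torch.softmax(self.proj(answer_state), dim=-1)   # (batch, num_question_words)

# Example with random encoder states:
head = QuestionWordHead(hidden_size=16)
states = torch.randn(2, 10, 16)
starts = torch.tensor([3, 7])
print(head(states, starts).shape)  # torch.Size([2, 9])
```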
“…In summary, their intuition that "the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question" is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun et al. (2018) and analyze its performance under different relative distances between the answer and other non-stopword sentence words that also appear in the ground-truth question. The results are shown in Table 1.…”
Section: Introduction (mentioning)
confidence: 99%
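A simplified sketch of this kind of distance analysis for a single example (the stopword list and the helper name are our own assumptions):

```python
# Illustrative sketch (our own simplification): for one example, find context
# words (excluding stopwords) that also appear in the reference question and
# record their distance to the answer span, as in the analysis described above.

STOPWORDS = {"the", "a", "an", "in", "of", "was", "is", "to"}  # assumed toy list

def overlap_distances(context_tokens, question_tokens, answer_start, answer_end):
    question_set = {w.lower() for w in question_tokens}
    distances = []
    for i, tok in enumerate(context_tokens):
        if answer_start <= i <= answer_end or tok.lower() in STOPWORDS:
            continue
        if tok.lower() in question_set:
            dist = answer_start - i if i < answer_start else i - answer_end
            distances.append((tok, dist))
    return distances

if __name__ == "__main__":
    ctx = "the eiffel tower was completed in 1889 in paris".split()
    q = "when was the eiffel tower in paris completed ?".split()
    print(overlap_distances(ctx, q, answer_start=6, answer_end=6))
    # -> [('eiffel', 5), ('tower', 4), ('completed', 2), ('paris', 2)]
```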