Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1317

Improving Question Generation With to the Point Context

Abstract: Question generation (QG) is the task of generating a question from a reference sentence and a specified answer within the sentence. A major challenge in QG is to identify answer-relevant context words to finish the declarative-to-interrogative sentence transformation. Existing sequence-to-sequence neural models achieve this goal with proximity-based answer position encoding, under the intuition that words neighboring the answer are highly likely to be answer-relevant. However, such intuition may not apply…
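
To make the proximity intuition concrete, here is a minimal Python sketch (not the authors' code) of proximity-based answer position features: each token is bucketed by its clipped distance to the answer span, so that a downstream embedding layer can treat nearby words as more likely answer-relevant. The function name and the max_dist cutoff are illustrative assumptions.

def answer_position_features(num_tokens, answer_start, answer_end, max_dist=10):
    # Tokens inside the answer span get distance 0; tokens outside get
    # their clipped distance to the nearest answer boundary.  An embedding
    # over these buckets yields the position encoding.
    features = []
    for i in range(num_tokens):
        if answer_start <= i <= answer_end:
            dist = 0
        elif i < answer_start:
            dist = min(answer_start - i, max_dist)
        else:
            dist = min(i - answer_end, max_dist)
        features.append(dist)
    return features

# "Oxygen was discovered by Carl Wilhelm Scheele in 1773",
# answer span "Carl Wilhelm Scheele" at token positions 4..6:
print(answer_position_features(9, 4, 6))   # [4, 3, 2, 1, 0, 0, 0, 1, 2]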

Cited by 35 publications (17 citation statements) | References 23 publications
“…In order to make use of the context information of paragraphs, Zhao et al. (2018b) propose a gated self-attention network to encode the context passage. Building on this, Zhang and Bansal (2019) apply reinforcement learning to deal with semantic drift in QG; Nema et al. (2019) use a passage-answer fusion mechanism to obtain answer-focused context representations; and Li et al. (2019a) use gated attention to fuse answer-relevant relations with the context sentence. In addition, Chen et al. (2019) design different passage graphs to capture the structural information of a passage through graph neural networks.…”
Section: Related Work (mentioning)
Confidence: 99%
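
The gated self-attention encoder cited above can be sketched in a few lines of PyTorch. This is a hedged illustration of the general mechanism only; layer names and dimensions are chosen for the example rather than taken from Zhao et al.'s released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfAttention(nn.Module):
    # Passage states attend over themselves; a sigmoid gate then decides
    # how much of the self-matched context to mix into each state.
    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size, hidden_size, bias=False)
        self.fuse = nn.Linear(2 * hidden_size, hidden_size, bias=False)
        self.gate = nn.Linear(2 * hidden_size, hidden_size, bias=False)

    def forward(self, h):                      # h: (batch, seq_len, hidden)
        scores = torch.bmm(self.attn(h), h.transpose(1, 2))
        a = F.softmax(scores, dim=-1)          # self-attention weights
        s = torch.bmm(a, h)                    # self-matched representation
        hs = torch.cat([h, s], dim=-1)
        f = torch.tanh(self.fuse(hs))          # fused candidate states
        g = torch.sigmoid(self.gate(hs))       # per-dimension gate
        return g * f + (1 - g) * h

h = torch.randn(2, 30, 256)                    # toy passage encodings
print(GatedSelfAttention(256)(h).shape)        # torch.Size([2, 30, 256])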
“…On SQuAD, since two different splits are most often used, we conduct experiments on both splits at the sentence level. For the Du split (Du et al., 2017), we use the same settings as Li et al. (2019a); there are 74,689, 10,427, and 11,609 sentence-question-answer triples for training, validation, and test, respectively. For the Zhou split, we use the shared data; there are 86,635, 8,965, and 8,964 triples, respectively.…”
Section: Datasets (mentioning)
Confidence: 99%
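
As a small illustration of how these split sizes might be used, the sketch below records the reported counts and sanity-checks a loaded list of sentence-question-answer triples against them; load_triples is a hypothetical loader, not part of any cited codebase.

# Reported sizes of the two common SQuAD QG splits (from the quote above).
EXPECTED_SIZES = {
    "du":   {"train": 74689, "dev": 10427, "test": 11609},
    "zhou": {"train": 86635, "dev": 8965,  "test": 8964},
}

def check_split(split, part, triples):
    # triples: a list of (sentence, question, answer) tuples.
    expected = EXPECTED_SIZES[split][part]
    assert len(triples) == expected, (
        f"{split}/{part}: got {len(triples)} triples, expected {expected}"
    )

# e.g. check_split("du", "train", load_triples("du_split/train.json"))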
“…As follow-up work, Sun et al. (2018) and Kim et al. (2019) propose to utilize the answers to reduce generation uncertainty. Meanwhile, Li et al. (2019) explore using answer-relevant context to guide question generation. Besides, some studies (Wang et al., 2017; Wang et al., 2019) treat question generation as a subtask and learn it jointly with other tasks, such as question answering and phrase extraction, which also helps to alleviate the uncertainty and improve generation performance.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Question generation (QG) is an emerging research topic due to its wide range of application scenarios, such as education, goal-oriented dialogue (Lee et al., 2018), and question answering. Early neural QG models outperform rule-based methods that rely on hand-crafted features, and various models have since been proposed to further improve performance by incorporating question type (Dong et al., 2018), answer position, long-passage modeling (Zhao et al., 2018b), question difficulty, and to-the-point context (Li et al., 2019). Some works try to find possible answer text spans to facilitate learning.…”
Section: Related Work (mentioning)
Confidence: 99%