Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016)
DOI: 10.18653/v1/d16-1032

Globally Coherent Text Generation with Neural Checklist Models

Abstract: Recurrent neural networks can generate locally coherent text but often have difficulties representing what has already been generated and what still needs to be said, especially when constructing long texts. We present the neural checklist model, a recurrent neural network that models global coherence by storing and updating an agenda of text strings which should be mentioned somewhere in the output. The model generates output by dynamically adjusting the interpolation among a language model and a pair of attention models […]
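The abstract describes the model's core move: at each decoding step, interpolate among a language model and attention distributions over agenda items that are still unused versus already used, while updating a "checklist" of what has been covered. Below is a minimal, illustrative NumPy sketch of that idea under simplifying assumptions; it is not the authors' implementation, and the function names, the toy gate, and the dimensions are invented for the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_step(hidden, vocab_logits, agenda_keys, checklist):
    """One decoding step of a checklist-style generator (toy sketch).

    hidden:       decoder state, shape (d,)
    vocab_logits: language-model scores over the vocabulary, shape (V,)
    agenda_keys:  one embedding per agenda item, shape (n_items, d)
    checklist:    soft "already mentioned" indicator in [0, 1], shape (n_items,)
    """
    # Attention over agenda items, biased toward still-new vs. already-used items.
    scores = agenda_keys @ hidden                               # (n_items,)
    attn_new = softmax(scores + np.log(1.0 - checklist + 1e-8))
    attn_used = softmax(scores + np.log(checklist + 1e-8))

    # Toy stand-in for a learned gate that splits probability mass among the
    # language model and the two attention models.
    gate = softmax(np.array([hidden.sum(), attn_new.max(), attn_used.max()]))

    p_vocab = softmax(vocab_logits)

    # Items attended to as "new" are softly checked off for later steps.
    updated_checklist = np.clip(checklist + gate[1] * attn_new, 0.0, 1.0)
    return gate, p_vocab, attn_new, attn_used, updated_checklist

# Example: three agenda items (e.g. ingredients), none mentioned yet.
rng = np.random.default_rng(0)
out = decode_step(
    hidden=rng.normal(size=8),
    vocab_logits=rng.normal(size=50),
    agenda_keys=rng.normal(size=(3, 8)),
    checklist=np.zeros(3),
)
```

In the actual model the gate and attention parameters are learned and the attention distributions are mapped onto the tokens of the agenda strings; the sketch only shows how the interpolation weights and the checklist interact.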

Cited by 203 publications (220 citation statements)
References: 22 publications

“…Many techniques have been proposed to improve the performance of the basic sequence-to-sequence architecture by allowing more encoder-side information during decoding, e.g. using attention [Bahdanau et al., 2015], neural checklists [Kiddon et al., 2016], and pointer networks [Vinyals et al., 2015a].…”
Section: Related Work (mentioning)
Confidence: 99%
“…Sequence-to-sequence models which consider an extra signal from the input sequence to modify output token generation have recently been proposed in various contexts, e.g. for recipe generation [Kiddon et al., 2016], optimization [Vinyals et al., 2015a], and semantic parsing [Jia and Liang, 2016]. However, our argument transfer model is architecturally different from all the aforementioned models.…”
Section: Training (mentioning)
Confidence: 99%
“…In terms of network architecture, Wen et al. (2015) equip an LSTM with a semantic control cell to improve the informativeness of the generated sentence. Kiddon et al. (2016) propose the neural checklist model to explicitly track what has been mentioned and what is left to say by splitting the two into different lists. Our model is related to these models with respect to information representation and the challenges of coverage and redundancy.…”
Section: Related Work (mentioning)
Confidence: 99%
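The "two lists" framing quoted above (what has already been mentioned versus what is left to say) can be illustrated with a deliberately simplified, hard-decision sketch; the neural checklist model does this softly via attention, and the function and toy agenda below are invented purely for illustration.

```python
def track_agenda(generated_tokens, agenda):
    """Split agenda items into mentioned vs. remaining by naive surface match."""
    mentioned, remaining = [], []
    for item in agenda:
        if all(tok in generated_tokens for tok in item.split()):
            mentioned.append(item)
        else:
            remaining.append(item)
    return mentioned, remaining

mentioned, remaining = track_agenda(
    "preheat the oven and mix the flour".split(),
    ["preheat oven", "mix flour", "add eggs"],
)
# mentioned -> ['preheat oven', 'mix flour']; remaining -> ['add eggs']
```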
“…However, while neural methods are effective for generation of individual sentences conditioned on some context, they struggle with coherence when used to generate longer texts (Kiddon et al., 2016). In addition, it is challenging to apply neural models in less constrained generation tasks with many valid solutions, such as open-domain dialogue and story continuation.…”
Section: Introduction (mentioning)
Confidence: 99%