Proceedings of the Second Workshop on Storytelling 2019
DOI: 10.18653/v1/w19-3404

A Hybrid Model for Globally Coherent Story Generation

Abstract: Automatically generating globally coherent stories is a challenging problem. Neural text generation models have been shown to perform well at generating fluent sentences from data, but they usually fail to keep track of the overall coherence of the story after a couple of sentences. Existing work that incorporates a text planning module has succeeded in generating recipes and dialogues, but appears quite data-demanding. We propose a novel story generation approach that generates globally coherent stories from a fai…

Cited by 19 publications (18 citation statements). References 14 publications (13 reference statements).
“…Baselines We evaluate our model against three baselines: (1) the GRU-based model from Zhai et al. (2019), as it also takes an ordered agenda as input;…”
Section: Discussion
confidence: 99%
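The baseline described in this statement, a GRU sequence model that consumes an ordered event agenda, can be pictured with a short sketch. The following is illustrative only, not the published Zhai et al. (2019) code; the class name, dimensions, and the concatenation-based conditioning scheme are all assumptions.

```python
# A minimal sketch of a GRU decoder conditioned on an ordered event agenda,
# in the spirit of the baseline quoted above. Names, dimensions, and the
# conditioning scheme are illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

class AgendaConditionedGRU(nn.Module):
    def __init__(self, vocab_size, n_events, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.event_emb = nn.Embedding(n_events, emb_dim)
        # Input at each step: the previous word's embedding concatenated
        # with the embedding of the agenda event currently being realized.
        self.gru = nn.GRU(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, event_ids, hidden=None):
        # words:     (batch, seq_len) token ids of the story so far
        # event_ids: (batch, seq_len) agenda-event id aligned to each step
        w = self.word_emb(words)
        e = self.event_emb(event_ids)
        output, hidden = self.gru(torch.cat([w, e], dim=-1), hidden)
        return self.out(output), hidden  # per-step vocabulary logits
```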
“…Other external story generation systems we considered are not suitable either, because they do not admit a compatible input format (i.e. taking a linear agenda as input) or because they cannot fully exploit the temporal order information encoded in a linear agenda (such systems tend to perform poorly on INSCRIPT-based story generation; see Zhai et al. (2019)).…”
Section: Discussion
confidence: 99%
“…Kiddon et al. (2016) avoid the need to represent each of the desired sentences by maintaining a checklist of required words and implementing a gating mechanism to insert these words and track which were used, demonstrating these abilities on cooking recipes. Zhai et al. (2019) develop this notion further by using events as the ingredients in the checklist, while also conditioning the generated text on the desired next event, concluding that their model still generates shorter stories with less event coverage than those produced by humans. Fan et al. (2018) collect a corpus of writing prompts and the corresponding stories written by Reddit users.…”
Section: Plotline Representations in Neural Story Generation
confidence: 99%
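For readers unfamiliar with the checklist idea quoted above, here is a minimal sketch of a checklist-style attention step, loosely after Kiddon et al. (2016): the decoder attends only over agenda items that have not yet been realized, and the caller marks an item as used once it is emitted. The class name, interface, and update rule are simplified assumptions, not the published model.

```python
# A minimal sketch of checklist attention: mask out already-used agenda
# items so the decoder focuses on what remains to be said. Interface and
# update rule are simplified assumptions, not Kiddon et al.'s exact model.
import torch
import torch.nn as nn

class ChecklistAttention(nn.Module):
    def __init__(self, item_dim, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, item_dim)

    def forward(self, hidden, items, used):
        # hidden: (batch, hidden_dim)        current decoder state
        # items:  (batch, n_items, item_dim) embeddings of agenda items
        # used:   (batch, n_items)           1.0 if the item was realized
        scores = torch.einsum('bd,bnd->bn', self.proj(hidden), items)
        scores = scores.masked_fill(used.bool(), float('-inf'))
        # Softmax puts all mass on unused items (assumes at least one left).
        weights = torch.softmax(scores, dim=-1)
        context = torch.einsum('bn,bnd->bd', weights, items)
        # Caller marks the attended item as realized after emitting it, e.g.
        #   used.scatter_(1, weights.argmax(-1, keepdim=True), 1.0)
        return context, weights
```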