Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.351

Content Planning for Neural Story Generation with Aristotelian Rescoring

Abstract: Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion. We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation. We utilize a plot-generation language model along with an ensemble of rescoring models that each implement an aspect of good story-writing as detailed in Aristotle's Poetics. We find that stories written with our more principled plot-structure are both more relevant to a given prompt and higher quality than baselines that do not content plan, or that plan in an unprincipled way.
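As a concrete illustration of the mechanism the abstract describes, the sketch below reranks candidate plot continuations by combining the plot language model's log-probability with weighted scores from an ensemble of rescoring models. This is a minimal sketch under stated assumptions, not the paper's implementation: pick_best, the relevance scorer, and the weights are hypothetical stand-ins.

    # Minimal sketch of ensemble rescoring: a plot LM proposes candidates,
    # and rescoring models (each capturing one aspect of good story-writing)
    # rerank them. All scorers here are hypothetical stand-ins.
    from typing import Callable, List, Tuple

    def pick_best(candidates: List[Tuple[str, float]],
                  rescorers: List[Callable[[str], float]],
                  weights: List[float]) -> str:
        """Return the candidate maximizing LM log-prob + weighted rescorer scores."""
        def total(cand: Tuple[str, float]) -> float:
            text, lm_logprob = cand
            return lm_logprob + sum(w * r(text) for w, r in zip(weights, rescorers))
        return max(candidates, key=total)[0]

    # Toy usage: one "relevance" rescorer that prefers on-prompt continuations.
    relevance = lambda text: 1.0 if "dragon" in text else 0.0  # hypothetical scorer
    candidates = [("The knight rests.", -3.8), ("The knight faces the dragon.", -4.5)]
    print(pick_best(candidates, [relevance], [2.0]))  # boost outweighs the LM gap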

Cited by 65 publications (55 citation statements)
References 26 publications
“…Early narrative prose generation systems (Meehan, 1977; Callaway and Lester, 2001; Riedl and Young, 2004) relied on graph-based planning formalisms and custom rules to structure their narratives, while story graphs have been used for interactive storytelling (Riedl and Bulitko, 2013). More recent work uses deep learning to generate stories by training neural models with limited context (Peng et al., 2018; Fan et al., 2018; Goldfarb-Tarrant et al., 2019) and structured knowledge, either external (Mao et al., 2019; Guan et al., 2020; Goldfarb-Tarrant et al., 2020) or derived (Yao et al., 2019; Fan et al., 2019). Compared to the datasets studied in those works, our STORIUM dataset contains much longer stories with built-in structural annotations written in natural language in the form of cards (Table 2).…”
Section: Related Work (mentioning)
Confidence: 99%
“…To overcome this, we introduce a rescoring model during the decoding process to favor more metaphorical verbs. The rescoring model is inspired by Holtzman et al. (2018) and Goldfarb-Tarrant et al. (2020) and detailed in the next section.…”
Section: Transfer Learning from BART (mentioning)
Confidence: 99%
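The statement above applies rescoring during decoding rather than after it. A hedged sketch of that per-step variant, under the assumption that a bonus is simply added to a preferred token set before the next token is chosen; the verb lexicon and bonus value are illustrative, not taken from the cited papers:

    # Per-step rescoring sketch: boost preferred tokens (e.g. metaphorical
    # verbs) before selecting the next token. The lexicon and bonus are
    # illustrative assumptions.
    METAPHORICAL_VERBS = {"devour", "drown", "bloom"}  # hypothetical lexicon

    def next_token(token_logprobs: dict, bonus: float = 2.0) -> str:
        """Return the highest-scoring token after boosting preferred ones."""
        adjusted = {
            tok: lp + (bonus if tok in METAPHORICAL_VERBS else 0.0)
            for tok, lp in token_logprobs.items()
        }
        return max(adjusted, key=adjusted.get)

    print(next_token({"eat": -1.0, "devour": -2.5}))  # -> "devour" after the boost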
“…Convlines are abstract representations or content plans of utterances throughout the conversation. These representations, also known as storylines or story plots in the context of story generation, have recently been shown to be effective for generating higher-quality stories (Fan et al., 2019; Goldfarb-Tarrant et al., 2020; Rashkin et al., 2020). Story generation models leverage a plan-and-write framework that succeeds in generating fluent and informative stories by introducing storylines as an intermediate step.…”
Section: Convline Generator (mentioning)
Confidence: 99%
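The plan-and-write pattern referenced in this statement separates planning from surface realization. A minimal sketch, assuming two placeholder models (a planner and a realizer, both hypothetical here):

    # Plan-and-write sketch: generate an abstract plan first, then condition
    # the story generator on it. Both functions are hypothetical placeholders
    # for trained models.
    from typing import List

    def generate_plan(prompt: str) -> List[str]:
        return ["meeting", "conflict", "reconciliation"]  # a trained planner would go here

    def generate_story(prompt: str, plan: List[str]) -> str:
        # A trained realizer would condition on both the prompt and the plan.
        return f"Story for '{prompt}', following: {' -> '.join(plan)}"

    print(generate_story("two old friends", generate_plan("two old friends")))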
“…Content planning has been shown to be beneficial in the story generation task. These abstract representations, known as storylines or story plots, have been successful in guiding language models to produce more coherent and fluent stories (Goldfarb-Tarrant et al., 2019; Fan et al., 2019; Goldfarb-Tarrant et al., 2020; Rashkin et al., 2020; Brahman et al., 2020).…”
Section: Introduction (mentioning)
Confidence: 99%