Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1254

Strategies for Structuring Story Generation

Abstract: Writers often rely on plans or sketches to write long stories, but most current language models generate word by word from left to right. We explore coarse-to-fine models for creating narrative texts of several hundred words, and introduce new models which decompose stories by abstracting over actions and entities. The model first generates the predicate-argument structure of the text, where different mentions of the same entity are marked with placeholder tokens. It then generates a surface realization of the…
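The two-stage decomposition the abstract describes can be illustrated with a small Python sketch. Everything below is a hypothetical illustration: in the paper both stages are learned sequence-to-sequence models, whereas here the first-stage output is hard-coded and the entity realization is a simple dictionary lookup, shown only to make the placeholder bookkeeping concrete.

import re

def realize_entities(sketch: str, entities: dict) -> str:
    # Replace placeholder tokens (ent0, ent1, ...) from a first-stage
    # predicate-argument sketch with concrete mentions, so repeated
    # references to the same entity stay consistent across the story.
    return re.sub(r"\bent(\d+)\b",
                  lambda m: entities.get(m.group(0), m.group(0)),
                  sketch)

# A sketch as a (hypothetical) first-stage model might emit it:
sketch = "ent0 sails to ent1 . ent0 befriends ent2 ."
entities = {"ent0": "the captain", "ent1": "a distant island",
            "ent2": "a parrot"}
print(realize_entities(sketch, entities))
# -> the captain sails to a distant island . the captain befriends a parrot .

Because every mention of the same entity shares one placeholder, the second stage can realize "ent0" consistently wherever it appears, which is the coreference benefit the abstract's decomposition is designed to capture.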

Cited by 158 publications (161 citation statements)
References 31 publications
“…Recent studies show that the modeling capacity of multi-head attention has not been fully exploited. Providing specific guidance to different heads, without breaking the vanilla multi-head attention mechanism, can further boost performance, e.g., disagreement regularization (Li et al., 2018; Tao et al., 2018), information aggregation (Li et al., 2019a), functional specialization of attention heads (Fan et al., 2019), and the combination of multi-head attention with multi-task learning (Strubell et al., 2018). Our work demonstrates that multi-head attention also benefits from the integration of phrase information.…”
Section: Related Work
confidence: 72%
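For context, the vanilla multi-head attention mechanism that these citing works extend can be sketched in a few lines. The NumPy version below is a minimal illustration of the standard formulation (Vaswani et al., 2017); the shapes, weight names, and omission of masking and biases are simplifying assumptions, not details from any cited paper.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    # Vanilla multi-head self-attention.
    # x: (seq_len, d_model); Wq, Wk, Wv, Wo: (d_model, d_model).
    seq_len, d_model = x.shape
    d_head = d_model // n_heads

    def project(W):
        # Project, then split into heads: (n_heads, seq_len, d_head).
        return (x @ W).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = project(Wq), project(Wk), project(Wv)

    # Scaled dot-product attention, computed independently per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores) @ v  # (n_heads, seq_len, d_head)

    # Concatenate the heads and apply the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

The guidance cues mentioned above (e.g., disagreement regularization or functional specialization) are typically imposed as auxiliary training objectives over the per-head scores or outputs, leaving this forward pass itself unchanged.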
“…Other work has looked for innovative ways to separate planning and surface realization from end-to-end neural systems, most notably Wiseman et al. (2018), which also learns template generation on the E2E task but does not yet match baseline performance, and He et al. (2018), in which a dialogue manager controls decision making and passes this information on to a secondary language generator. Other work has attempted either multi-stage, semi-unconstrained language generation, such as in the domain of storytelling (Fan et al., 2019), or filling-in-the-blanks style sentence reconstruction (Fedus et al., 2018).…”
Section: Related Work
confidence: 99%
“…For instance, [9] generate a coherent story from independent descriptions describing a scene or an event, while [5] explore a strategy for story generation. Both frameworks use sequence-to-sequence neural networks.…”
Section: Related Work 2.1 Story Generation
confidence: 99%