Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1152

Learning to Write with Cooperative Discriminators

Abstract: Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice's maxims, and are collectively combined with the base RNN generator through…
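Although the abstract is truncated above, the framework it describes amounts to rescoring candidate continuations with a weighted combination of the base language model's score and the discriminators' scores. The following is a minimal illustrative sketch of that idea; the class, function names, weights, and reranking loop are assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interface: each "discriminator" scores a candidate continuation
# on one principle of communication (e.g. repetition, relevance, entailment).
Discriminator = Callable[[str, str], float]  # (context, continuation) -> score


@dataclass
class CooperativeScorer:
    # log p(continuation | context) under the base RNN language model
    base_lm_logprob: Callable[[str, str], float]
    discriminators: List[Discriminator]
    weights: List[float]  # one mixture weight per discriminator (assumed learned)

    def score(self, context: str, continuation: str) -> float:
        # Composite objective: base LM log-probability plus a weighted sum
        # of discriminator scores.
        total = self.base_lm_logprob(context, continuation)
        for w, d in zip(self.weights, self.discriminators):
            total += w * d(context, continuation)
        return total

    def rerank(self, context: str, candidates: List[str]) -> str:
        # Choose the candidate continuation (e.g. from beam search over the
        # base RNN) that maximizes the composite score.
        return max(candidates, key=lambda c: self.score(context, c))
```

In this sketch the discriminators only rerank candidates proposed by the generator; how the weights are learned and how scores are integrated during decoding follow the paper itself, not this simplified example.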

Cited by 316 publications (398 citation statements)
References 37 publications
“…In neural text generation, it is less easy to impose a narrative structure on the generated texts, unless the task is split into two steps, as in the work of Yao et al. (2019). An alternative way to improve the global coherence of texts generated with recurrent neural networks was proposed by Holtzman et al. (2018), who used a set of discriminators to encode various aspects of proper writing.…”
Section: Narrative Coherence (mentioning)
confidence: 99%
“…BookCorpus is a set of unpublished novels (Romance, Fantasy, Science fiction, and Teen genres) collected by Zhu et al. (2015). We use a publicly available pre-trained BookCorpus language model from Holtzman et al. (2018).…”
Section: Training Data (mentioning)
confidence: 99%