2018
DOI: 10.48550/arxiv.1805.08352
Preprint

Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators

Cited by 3 publications (2 citation statements) | References 0 publications
“…For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output (Kikuchi et al., 2016; Fan et al., 2017; Oraby et al., 2018; See et al., 2019). Previous work proposed a mechanism for training models with specific control tokens so that the model learns to associate each control token with the desired text properties (Fan et al., 2017); the control tokens are then modified during inference to produce the desired result.…”
Section: Conditional Training
Confidence: 99%
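The control-token mechanism described in the excerpt above can be sketched in a few lines: during training, each source sequence is prefixed with a token derived from a property of its reference target, and at inference time the token is set directly to request that property. The sketch below is a hypothetical illustration, not code from any of the cited papers; the token names, the word lists, and the bucketing scheme are all assumptions.

```python
# Hypothetical sketch of Conditional Training (CT) data preparation.
# During training, the control token is computed from the reference target,
# so the model learns to associate the token with the text property;
# at inference, the desired token is supplied directly.

MALE_WORDS = {"he", "him", "his", "man", "men"}       # assumed toy word lists
FEMALE_WORDS = {"she", "her", "hers", "woman", "women"}

def control_token(target: str) -> str:
    """Bucket a target utterance by which gendered words it contains."""
    words = set(target.lower().split())
    has_male = bool(words & MALE_WORDS)
    has_female = bool(words & FEMALE_WORDS)
    if has_male and has_female:
        return "<gender:both>"
    if has_male:
        return "<gender:male>"
    if has_female:
        return "<gender:female>"
    return "<gender:neutral>"

def add_control(source: str, target: str) -> str:
    """Prepend the target-derived control token to the source sequence."""
    return f"{control_token(target)} {source}"

# Training-time example: the token is derived from the reference target.
print(add_control("describe the knight", "he draws his sword"))
# → <gender:male> describe the knight
```

At inference the same model is simply fed a chosen prefix, e.g. `"<gender:neutral> describe the knight"`, to steer generation toward the associated property.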
“…In NLG for spoken dialogue systems, the trainable sentence planner proposed in (Walker et al., 2002; Stent et al., 2004) provides the flexibility of adapting to different domains. Subsequently, generators that can tailor output to user preferences (Walker et al., 2007) or learn personality traits (Walker, 2008, 2011; Oraby et al., 2018) were proposed. To achieve multi-domain NLG, exploiting the shared knowledge between domains is important for handling unseen semantics.…”
Section: Related Work
Confidence: 99%