Controlling Linguistic Style Aspects in Neural Language Generation
Preprint, 2017
DOI: 10.48550/arxiv.1707.02633

Cited by 34 publications (41 citation statements)
References 0 publications
“…Various works have studied guided generation for images [54] and language [55,56]. Several works [57,58,59,60,61,62] have explored training or fine-tuning of models for controllable text generation. Class-conditional language models can also be used to learn discriminators to guide generation [63,55,64,65].…”
Section: Conditional Language Generation (mentioning)
confidence: 99%
“…In recent years, the development of deep learning (DL) has given rise to a series of studies on DL-driven controllable text generation (CTG), which has brought genuine breakthroughs in this field. Early approaches are based on sequential models and style embedding [Ficler and Goldberg 2017; Li et al 2016b], and achieved some promising progress. After that, there is a surge of methods based on deep generative models, such as Variational Autoencoders (VAEs) [Hu et al 2017a; Sohn et al 2015; Vechtomova et al 2018], Generative Adversarial Nets (GANs) [Scialom et al 2020; Wang and Wan 2018], and Energy-based Models [Bhattacharyya et al 2021; Deng et al 2020; Tu et al 2020; Zhao et al 2017].…”
Section: AI Chatbot Story Generation (mentioning)
confidence: 99%
“…One line of work fine-tunes a pretrained model for a desired attribute (Ficler and Goldberg, 2017; Yu et al, 2017; Ziegler et al, 2019). The result is a class-conditional language model (CCLM).…”
Section: Related Work (mentioning)
confidence: 99%
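To make the class-conditional setup mentioned in the last statement concrete, the following is a minimal sketch, not the cited paper's actual method: it assumes a Hugging Face GPT-2 model and a hypothetical "<positive>" style marker, and it only shows the conditioning mechanics at generation time (a real CCLM would first be fine-tuned on text prefixed with such markers so that the marker carries attribute information).

# Minimal sketch of class-conditional generation via a prepended control token.
# Assumptions: Hugging Face transformers with GPT-2; the "<positive>" marker is
# hypothetical and only becomes meaningful after fine-tuning on labeled data.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the control token and grow the embedding matrix to match.
tokenizer.add_tokens(["<positive>"])
model.resize_token_embeddings(len(tokenizer))

# Prepend the desired attribute token so generation is conditioned on it.
prompt = "<positive> The movie was"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))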