Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.339

Controllable Text Generation with Focused Variation

Abstract: This work introduces Focused-Variation Network (FVN), a novel model to control language generation. The main problems in previous controlled language generation models range from the difficulty of generating text according to the given attributes to the lack of diversity in the generated texts. FVN addresses these issues by learning disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity while generating fluent text. We evaluat…
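
The abstract's central idea, disjoint discrete latent spaces held in per-attribute codebooks, resembles vector quantization. Below is a minimal sketch of one such codebook; the class name, sizes, and the nearest-neighbor lookup with a straight-through gradient are illustrative assumptions, not the paper's exact design.

```python
import torch

class AttributeCodebook(torch.nn.Module):
    """A discrete latent space for one attribute (hypothetical sizes)."""
    def __init__(self, num_codes: int = 64, dim: int = 128):
        super().__init__()
        self.codes = torch.nn.Embedding(num_codes, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Snap each continuous encoding (B, dim) to its nearest code.
        dists = torch.cdist(h, self.codes.weight)   # (B, num_codes)
        q = self.codes(dists.argmin(dim=-1))        # quantized vectors
        # Straight-through estimator: gradients bypass the discrete step.
        return h + (q - h).detach()

# Disjoint codebooks, one per controlled attribute, so codes never mix.
books = {attr: AttributeCodebook() for attr in ("attribute_a", "attribute_b")}
quantized = {attr: book(torch.randn(4, 128)) for attr, book in books.items()}
```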

Cited by 25 publications (34 citation statements) | References 28 publications

“…Hence, M(X) and M(Ŷ) are content-word sets, extracted based on POS tags using NLTK. One can employ this prior knowledge to perform different operations for words: (i) for content words, we should encourage partial matching between the sentences via L_SEM; and (ii) for style words, we should discourage matching (Hu et al., 2017). More details are provided in Appendix A.1.…”
Section: Semantic Partial Matching for Text Generation
Citation type: mentioning (confidence: 99%)
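
The quoted extraction step can be approximated with off-the-shelf NLTK calls. A minimal sketch, assuming content words are nouns, verbs, adjectives, and adverbs (the citing paper's exact tag set is in its Appendix A.1, not shown here):

```python
import nltk

nltk.download("punkt", quiet=True)                        # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)   # POS tagger model

# Assumed content-word POS prefixes: nouns, verbs, adjectives, adverbs.
CONTENT_TAGS = ("NN", "VB", "JJ", "RB")

def content_words(sentence: str) -> set:
    """M(.): the set of content words, selected by POS tag."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    return {w.lower() for w, tag in tagged if tag.startswith(CONTENT_TAGS)}

# Partial matching between a source X and a generated Y-hat as set overlap.
m_x = content_words("The chef cooked a delicious meal.")
m_y = content_words("A delicious meal was cooked by the chef.")
print(len(m_x & m_y) / max(len(m_x), 1))   # fraction of shared content words
```
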
“…Experiments are implemented with TensorFlow based on texar. For a fair comparison, we use a model configuration similar to that of (Hu et al., 2017). A one-layer GRU (Cho et al., 2014) encoder and an LSTM attention decoder (generator) are used.…”
Section: Unsupervised Text-Style Transfer
Citation type: mentioning (confidence: 99%)
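
The configuration the quote names (one-layer GRU encoder, LSTM decoder with attention) can be sketched in a few lines of tf.keras. The hyperparameters below are placeholders, and feeding the GRU state as both the LSTM's h and c is a simplification, not the cited papers' exact setup:

```python
import tensorflow as tf

VOCAB, EMB, HID = 10_000, 100, 300   # assumed sizes

# One-layer GRU encoder (Cho et al., 2014).
enc_in = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(VOCAB, EMB)(enc_in)
enc_seq, enc_state = tf.keras.layers.GRU(
    HID, return_sequences=True, return_state=True)(enc_emb)

# LSTM decoder (generator) attending over the encoder outputs.
dec_in = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(VOCAB, EMB)(dec_in)
dec_seq = tf.keras.layers.LSTM(HID, return_sequences=True)(
    dec_emb, initial_state=[enc_state, enc_state])  # simplification
context = tf.keras.layers.Attention()([dec_seq, enc_seq])  # dot-product attention
logits = tf.keras.layers.Dense(VOCAB)(
    tf.keras.layers.Concatenate()([dec_seq, context]))

model = tf.keras.Model([enc_in, dec_in], logits)
```
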
“…Recently, a few works have been proposed for disentangled representation learning. InfoGAN was proposed for generating continuous image data conditioned on image labels, and disentangled text generation (Hu et al., 2017) focused on controllable generation in the semi-supervised setting. Though inspired by them, the motivation and proposed models of our work differ substantially from these methods.…”
Section: Related Work and Motivations
Citation type: mentioning (confidence: 99%)
“…Inspired by previous works on image generation and semi-supervised generation (Hu et al., 2017), we propose to incorporate the concept of mutual information from information theory for style disentanglement. Given two random variables X and Y, the mutual information I(X, Y) measures "the amount of information" obtained about one random variable by observing the other.…”
Section: Mutual Information
Citation type: mentioning (confidence: 99%)
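
For reference, the standard textbook definition behind the quoted description (not specific to the citing paper):

```latex
% Mutual information of discrete random variables X and Y.
I(X;Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} = H(X) - H(X \mid Y)
```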