2020
DOI: 10.1609/aaai.v34i05.6355
Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning

Abstract: Typical methods for unsupervised text style transfer often rely on two key ingredients: 1) seeking the explicit disentanglement of the content and the attributes, and 2) troublesome adversarial learning. In this paper, we show that neither of these components is indispensable. We propose a new framework that utilizes the gradients to revise the sentence in a continuous space during inference to achieve text style transfer. Our method consists of three key components: a variational auto-encoder (VAE), some attr…

Cited by 33 publications (17 citation statements). References 10 publications (36 reference statements).
“…(3) Decode the altered latent representation and get the transferred target with the auto-encoder. The most common approach to acquiring the gradient is to train a classifier that outputs the probability of each class directly (Nguyen et al. 2017; Wang, Hua, and Wan 2019; Liu et al. 2020). The classifier is trained by minimizing:…”
Section: Gradient-guided Optimization
confidence: 99%
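The classifier described in the statement above can be sketched as a plain logistic model fit on latent codes with the usual cross-entropy objective. This is a minimal illustration, not the paper's implementation: the function name, toy data, and hyperparameters are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_style_classifier(Z, y, lr=0.1, epochs=200):
    """Fit a logistic classifier P(style | z) on latent codes Z by
    minimizing binary cross-entropy -[y log p + (1 - y) log(1 - p)]."""
    n, d = Z.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(Z @ w + b)
        err = p - y                   # gradient of mean cross-entropy
        w -= lr * (Z.T @ err) / n     # w.r.t. the weights
        b -= lr * err.mean()          # and the bias
    return w, b

# toy latent codes whose style is a linear direction (hypothetical)
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 8))
y = (Z[:, 0] > 0).astype(float)
w, b = train_style_classifier(Z, y)
acc = ((sigmoid(Z @ w + b) > 0.5) == y.astype(bool)).mean()
```

Once trained, the classifier's input gradient (rather than its prediction) is what the revision step consumes.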
“…We conduct a comprehensive comparison with previous state-of-the-art models, including CrossAlign (Shen et al. 2017), StyleEmb (Fu et al. 2018), MultiDec (Fu et al. 2018), Rule-Base (Li et al. 2018), DelRetrGen (Li et al. 2018), ContiSpace (Liu et al. 2020) and GBT (Wang, Hua, and Wan 2019). We consider three variants of our model:…”
Section: Baselines
confidence: 99%
“…Instead of disentangling content and style, other papers focus on revising an entangled representation of the input. A few previous studies utilize a pre-trained classifier and edit the entangled latent variable until it contains the target style using gradient-based optimization (Wang et al., 2019; Liu et al., 2020). He et al. (2020) view each domain of data as a partially observable variable, and transfer sentences using amortized variational inference.…”
Section: Related Work
confidence: 99%
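The gradient-based revision these citing papers describe can be sketched as gradient ascent on the classifier's log-probability of the target style, taken with respect to the latent code. A minimal sketch, assuming a fixed logistic style classifier sigmoid(w·z + b); all names and the toy setup are illustrative, not the authors' code.

```python
import numpy as np

def revise_latent(z, w, b, target=1, lr=0.5, steps=100):
    """Revise latent code z by gradient ascent on log P(target | z)
    under a logistic style classifier sigmoid(w.z + b), stopping as
    soon as the classifier predicts the target style."""
    z = z.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ z + b)))   # P(style = 1 | z)
        if (p > 0.5) == bool(target):
            break
        # d/dz log P(target | z) = (target - p) * w for a logistic model
        z += lr * (target - p) * w
    return z

# toy setup: a fixed (hypothetical) classifier and a latent code that
# starts firmly in the source style
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
z_src = -w                                   # classified as style 0
z_rev = revise_latent(z_src, w, b, target=1)
p_src = 1.0 / (1.0 + np.exp(-(w @ z_src + b)))
p_rev = 1.0 / (1.0 + np.exp(-(w @ z_rev + b)))
```

In the full pipeline the revised code would then be passed back through the VAE decoder to produce the transferred sentence; here we only verify that the revision flips the classifier's decision.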