Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/711

A Dual Reinforcement Learning Framework for Unsupervised Text Style Transfer

Abstract: Unsupervised text style transfer aims to transfer the underlying style of text but keep its main content unchanged without parallel data. Most existing methods typically follow two steps: first separating the content from the original style, and then fusing the content with the desired style. However, the separation in the first step is challenging because the content and style interact in subtle ways in natural language. Therefore, in this paper, we propose a dual reinforcement learning framework to directly …
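The abstract only sketches the approach, but the core idea it names, training transfer models directly with reinforcement learning instead of an explicit separate-then-fuse pipeline, can be illustrated with a toy REINFORCE loop: the transfer policy samples a candidate rewrite and is rewarded jointly for changing the style (a style classifier's verdict) and preserving the content (a reconstruction or overlap score). The sketch below is illustrative only; the ToySeq2Seq policy, the placeholder reward functions, and the harmonic-mean reward combination are all hypothetical stand-ins and not the authors' released code or exact reward design.

```python
import torch

class ToySeq2Seq(torch.nn.Module):
    """Stand-in for a seq2seq transfer policy: emits fixed-length token
    sequences from a learned per-position categorical distribution."""
    def __init__(self, vocab_size=50, seq_len=6):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(seq_len, vocab_size))

    def sample(self, batch_size):
        dist = torch.distributions.Categorical(logits=self.logits)
        tokens = dist.sample((batch_size,))        # (batch, seq_len)
        log_p = dist.log_prob(tokens).sum(dim=1)   # sequence log-probability
        return tokens, log_p

def style_reward(tokens):
    # Placeholder style classifier: treats the upper half of the vocabulary
    # as "target-style" tokens. A real system would use a trained classifier.
    return (tokens >= 25).float().mean(dim=1)

def content_reward(tokens, source):
    # Placeholder content score: token overlap with the source sentence,
    # standing in for a BLEU- or back-transfer-based reconstruction reward.
    return (tokens == source).float().mean(dim=1)

policy = ToySeq2Seq()                        # forward transfer model X -> Y
opt = torch.optim.Adam(policy.parameters(), lr=0.05)
source = torch.randint(0, 50, (32, 6))       # toy batch of source sentences

for step in range(200):
    y, log_p = policy.sample(batch_size=32)
    r_style = style_reward(y)
    r_content = content_reward(y, source)
    # Harmonic mean: a sample must score on BOTH axes to earn reward.
    reward = 2 * r_style * r_content / (r_style + r_content + 1e-8)
    # REINFORCE with a batch-mean baseline to reduce gradient variance.
    loss = -((reward - reward.mean()) * log_p).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Combining the two rewards multiplicatively (here via a harmonic mean, rather than a sum) reflects the intuition that style transfer degenerates if either axis is optimized alone: a rewrite that flips the style but loses the content, or copies the input verbatim, earns little credit.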

Cited by 123 publications (136 citation statements) · References 3 publications

Citation statements (ordered by relevance):
“…Our approach requires none of the finicky modeling paradigms popular in style transfer research: no reinforcement learning (Luo et al., 2019), variational inference (He et al., 2020), or autoregressive sampling during training (Subramanian et al., 2019). Instead, we implement the first two stages of our pipeline by simply fine-tuning a pretrained GPT-2 language model.…”
Section: Training Time / Test Time
confidence: 99%
“…The original code of Subramanian et al. (2019) has not been open-sourced. Results with other metrics such as BLEU, as well as comparisons against several other baselines like Prabhumoye et al. (2018); Luo et al. (2019); Sudhakar et al. (2019) … Table 3: Ablation study using automatic metrics on the Formality (Form.) and Shakespeare (Shak.)…”
Section: Comparisons Against Prior Work
confidence: 99%
“…Because of the lack of parallel datasets, most models focus on the unpaired transfer. Although plenty of sophisticated techniques are used in this task, such as adversarial learning (Chen et al., 2018), latent representations (Li and Mandt, 2018; Dai et al., 2019; Liu et al., 2019), and reinforcement learning (Luo et al., 2019; Gong et al., 2019; Xu et al., 2018), there is little discussion about what is changed and what remains unchanged.…”
Section: Introduction
confidence: 99%