Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer
2018 · Preprint · DOI: 10.48550/arxiv.1804.06437

Cited by 26 publications (41 citation statements) · References 10 publications
“…In our work, we leverage such a lexical resource for connotations (Allaway and McKeown, 2020) to reframe arguments to be more trustworthy (e.g., less partisan, no appeal-to-fear fallacy), while maintaining the same denotative meaning. While retrieve-and-replace methods perform well on other attribute transfer tasks such as sentiment (Li et al., 2018a; Sudhakar et al., 2019a), our task is more dependent on broader context within a sentence even though we are performing localized replacement. Thus, there are two main challenges we need to address: 1) the lack of a parallel dataset of negatively and positively framed arguments (naturally occurring); and 2) a generation approach that can not only change the connotative meaning but also keep the same denotative meaning of the input argument.…”
Section: Arg4 (confidence: 99%)
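The retrieve-and-replace approach cited here (Li et al., 2018) begins with a "delete" step: n-grams that occur far more often in one sentiment corpus than the other are treated as attribute markers and removed. A minimal sketch of that salience scoring over unigrams, where the smoothing constant and the threshold of 1.5 are illustrative assumptions, not the paper's exact settings:

```python
from collections import Counter

def salience(counts_src, counts_tgt, term, smooth=1.0):
    """Smoothed frequency ratio of `term` in the source corpus
    relative to the target corpus."""
    return (counts_src[term] + smooth) / (counts_tgt[term] + smooth)

def delete_markers(sentence, counts_src, counts_tgt, threshold=1.5):
    """Drop unigrams whose salience exceeds the threshold; what
    remains is the attribute-neutral content of the sentence."""
    kept = [w for w in sentence.split()
            if salience(counts_src, counts_tgt, w) < threshold]
    return " ".join(kept)

# Toy corpora standing in for negative vs. positive Yelp reviews.
neg = Counter("the food was terrible and the service was awful".split())
pos = Counter("the food was great and the service was friendly".split())

print(delete_markers("the pasta was terrible", neg, pos))
```

Here "terrible" has salience (1 + 1) / (0 + 1) = 2 in the negative corpus and is deleted, while neutral words like "the" and "was" score near 1 and are kept.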
“…Moreover, for lexical framing, subtle differences in word choices matter the most. By explicitly using special tokens ([SEP]) in our parallel data during fine-tuning, the BART model learns what to edit, instead of editing random words in the sentence, a common issue often found in attribute transfer models (Li et al., 2018a; Sudhakar et al., 2019a). At test time, therefore, we can ensure the model reframes a desired content span.…”
Section: Controllable Text Generation (confidence: 99%)
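The excerpt above describes wrapping the span to be reframed in [SEP] markers when building fine-tuning data, so the model learns to edit only that span. A minimal sketch of such data formatting; the helper name and token-index interface are illustrative, not the cited authors' code:

```python
SEP = "[SEP]"

def mark_span(sentence, start, end):
    """Wrap the token span [start, end) in SEP markers so a
    seq2seq model can learn which words it may rewrite."""
    toks = sentence.split()
    marked = toks[:start] + [SEP] + toks[start:end] + [SEP] + toks[end:]
    return " ".join(marked)

src = mark_span("the senator pushed a radical agenda", 4, 6)
# src == "the senator pushed a [SEP] radical agenda [SEP]"
```

Paired with a target sentence containing the reframed span, such examples teach the model to leave everything outside the markers untouched.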
“…We use the Yelp reviews dataset collected by Shen et al. (2017), which contains 250K negative sentences and 380K positive sentences. We also use a small test set of 1000 human-annotated parallel sentences introduced in Li et al. (2018). We denote the positive sentiment as domain D_1 and the negative sentiment as domain D_2.…”
Section: Datasets and Experiments Setup (confidence: 99%)
“…Recent work has explored a variety of text generation tasks that condition on a control variable to specify a desired trait of the output. Examples include summarization conditioned on a desired output length (Fan et al., 2018), paraphrase generation conditioned on a parse tree (Krishna et al., 2020), style transfer conditioned on sentiment (He et al., 2020b), and more (Hu et al., 2017; Li et al., 2018; Fu et al., 2018; He et al., 2020a). In this work, we specifically focus on text generation tasks that condition on a scalar control variable, as depicted in Figure 1.…”
Section: Introduction (confidence: 99%)
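Conditioning on a scalar control variable, as the excerpt above describes, is commonly implemented by quantizing the scalar into buckets and prepending a bucket token to the encoder input. A model-free sketch of that input formatting; the bucket boundaries and token names are illustrative assumptions, not any cited paper's exact scheme:

```python
def control_token(value, boundaries=(10, 30, 60)):
    """Map a scalar (e.g., a desired output length) to one of
    len(boundaries) + 1 discrete bucket tokens."""
    for i, b in enumerate(boundaries):
        if value < b:
            return f"<LEN_{i}>"
    return f"<LEN_{len(boundaries)}>"

def condition(source, value):
    """Prepend the control token so the model can condition on it."""
    return f"{control_token(value)} {source}"

print(condition("summarize this article", 25))
```

At training time the token is computed from the reference output; at test time the user sets it to steer the trait of the generation.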