2020
DOI: 10.1609/aaai.v34i05.6433

Adapting Language Models for Non-Parallel Author-Stylized Rewriting

Abstract: Given the recent progress in language modeling using Transformer-based neural models and an active interest in generating stylized text, we present an approach to leverage the generalization capabilities of a language model to rewrite an input text in a target author's style. Our proposed approach adapts a pre-trained language model to generate author-stylized text by fine-tuning on the author-specific corpus using a denoising autoencoder (DAE) loss in a cascaded encoder-decoder framework. Optimizing over DAE …
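The DAE fine-tuning described in the abstract trains the model to reconstruct clean text from a corrupted copy. The abstract does not spell out the corruption scheme, so the function below is only a minimal sketch of the token-dropout-plus-local-shuffle noising commonly paired with DAE losses; the names `add_noise`, `drop_prob`, and `shuffle_window` are illustrative assumptions, not taken from the paper.

```python
import random


def add_noise(tokens, drop_prob=0.1, shuffle_window=3, rng=None):
    """DAE-style input corruption (illustrative, not the paper's exact recipe).

    1. Word dropout: drop each token independently with probability drop_prob.
    2. Local shuffle: each surviving token may move at most shuffle_window
       positions from its original place.

    The denoising objective then trains the encoder-decoder to reconstruct
    the original `tokens` from this corrupted input.
    """
    rng = rng or random.Random(0)
    # Step 1: word dropout.
    kept = [t for t in tokens if rng.random() >= drop_prob]
    if not kept:
        kept = list(tokens)  # never feed the model an empty input
    # Step 2: local shuffle via jittered sort keys (position + small noise).
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]
```

During fine-tuning, the reconstruction loss would be the usual cross-entropy between the decoder's output and the uncorrupted sequence, with `add_noise` applied on the fly to each training example.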

Cited by 36 publications (70 citation statements)
References 18 publications (38 reference statements)
“…We believe that our work can also lead to rewriting of the input content tailored to certain characteristics, if we can design additional rewards to retain content. We have not performed a complete human evaluation due to the high level of expertise required of annotators for this task, as pointed out by Syed et al. (2020). Designing the feedback mechanism for such a human evaluation is nontrivial and has been left as part of the future work, along with designing rewarding schemes to capture other author-specific characteristics (e.g., syntactic choices, discourse structure).…”
Section: Discussion
confidence: 99%
“…An author's choices of words in these categories define their lexical style. For example, Rudyard Kipling, known for classics of children's literature, had a higher tendency to use concrete words (like gongs, rockets, torch), unlike Abraham Lincoln, who, being a political writer, used more abstract words (like freedom, patriotism) (Verma and Srinivasan, 2019; Syed et al., 2020). Since an author's style is an amalgam of preferences along these dimensions, our goal is to ensure simultaneous alignment to these multi-dimensional lexical preferences of an author.…”
Section: Author's Lexical Style
confidence: 99%
“…(3) approaches with multiple style-specific decoders (Syed et al., 2019; Chen et al., 2019). We highlight several applications, including persona-based dialogue generation (Li et al., 2016) and creative writing (Tikhonov and Yamshchikov, 2018; Vechtomova et al., 2018).…”
Section: Tutorial Introduction
confidence: 99%
“…Prior work on controlled generation guides the output of a model using attribute classifiers (Dathathri et al., 2020) or control codes (Keskar et al., 2019), but we find that these models do not perform well on our transfer task (§4.1.2). In contrast, models built for the transfer task are generally trained at the sentence level (Hu et al., 2017b,a; Li et al., 2018; Rao and Tetreault, 2018; Syed et al., 2019).…”
Section: Introduction
confidence: 99%