Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2020
DOI: 10.1145/3394486.3403231

Unsupervised Paraphrasing via Deep Reinforcement Learning

Abstract: Paraphrasing is expressing the meaning of an input sentence in different wording while maintaining fluency (i.e., grammatical and syntactical correctness). Most existing work on paraphrasing uses supervised models that are limited to specific domains (e.g., image captions). Such models can neither be straightforwardly transferred to other domains nor generalize well, and creating labeled training data for new domains is expensive and laborious. The need for paraphrasing across different domains and the scarcity…
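The abstract describes training a paraphrase generator without labeled pairs by using reinforcement learning in place of a supervised loss. Below is a minimal, hypothetical REINFORCE-style sketch of that general idea only, not the paper's actual model or training pipeline: the `TinyPolicy` network, toy vocabulary, and novelty-based `reward` are illustrative stand-ins, and PyTorch is assumed to be available.

```python
import torch
import torch.nn as nn

# Toy vocabulary; a real system would operate over full sentences.
VOCAB = ["the", "cat", "sat", "a", "feline", "rested"]

class TinyPolicy(nn.Module):
    """Hypothetical one-step policy: given an input token, score substitutes."""
    def __init__(self, vocab_size, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids):
        # Mean-pool input embeddings, then produce log-probabilities
        # over every candidate output token.
        h = self.embed(token_ids).mean(dim=0)
        return torch.log_softmax(self.out(h), dim=-1)

def reward(src_id, out_id):
    # Placeholder reward: +1 for choosing different wording. This stands in
    # for the semantic-adequacy/fluency rewards a real paraphraser would use.
    return 1.0 if out_id != src_id else 0.0

policy = TinyPolicy(len(VOCAB))
opt = torch.optim.Adam(policy.parameters(), lr=0.1)

src = torch.tensor([VOCAB.index("cat")])
for step in range(50):
    log_probs = policy(src)
    dist = torch.distributions.Categorical(logits=log_probs)
    action = dist.sample()  # sample a candidate "paraphrase" token
    # REINFORCE: scale the negative log-probability by the reward.
    loss = -dist.log_prob(action) * reward(src.item(), action.item())
    opt.zero_grad()
    loss.backward()
    opt.step()

print("most likely substitute for 'cat':", VOCAB[policy(src).argmax().item()])
```

The key design point the sketch illustrates is that no reference paraphrase appears anywhere: the gradient signal comes entirely from the reward function, which is what lets such methods run on unlabeled domains.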



Cited by 41 publications (24 citation statements). References 37 publications.
“…Rather than minimizing loss (the conventional approach), first utilized RL to maximize the reward given by an evaluator which outputs a real value to represent the matching degree between two sentences as paraphrases of each other. Other reward functions have been explored by researchers, including ROUGE score, perplexity score and language fluency (Siddique et al., 2020; …).…”
Section: A Model-focused (mentioning; confidence: 99%)
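The quoted statement lists concrete reward signals: an evaluator's matching score, ROUGE, perplexity, and language fluency. The sketch below illustrates two such signals under simplified, assumed definitions: `rouge1_f` is a unigram-overlap F1 (a stand-in for a real ROUGE implementation such as the `rouge-score` package), and `fluency_reward` is a hypothetical mapping from a language model's log-perplexity to a bounded reward; neither is a formula taken from the cited papers.

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap F1: a simplified ROUGE-1 stand-in."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    p = overlap / sum(cand.values())  # precision over candidate tokens
    r = overlap / sum(ref.values())   # recall over reference tokens
    return 2 * p * r / (p + r)

def fluency_reward(log_perplexity: float) -> float:
    # Assumed mapping to (0, 1]: lower perplexity (more fluent text)
    # yields a higher reward. A real system would obtain log_perplexity
    # from a trained language model.
    return 1.0 / (1.0 + log_perplexity)

# Example: score a candidate paraphrase against the source sentence.
src = "the cat sat on the mat"
cand = "a cat rested on the mat"
print(rouge1_f(cand, src))  # lexical-overlap component of the reward
```

In practice such components are combined, e.g. as a weighted sum, so the policy is rewarded for outputs that are simultaneously adequate, novel in wording, and fluent.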
“…Deep neural networks have proved highly effective for many critical NLP tasks [3,10,17,35,54,55,65,70] including slot filling. We organize the related work on slot filling into three categories: (i) supervised slot filling, (ii) few-shot slot filling, and (iii) zero-shot slot filling.…”
Section: Related Work (mentioning; confidence: 99%)
“…Deep neural networks have proved highly effective for many critical NLP tasks [9,15,20,23,28,30,35,43] including intent detection. Supervised intent detection works [17,20,26,39,43] assume the availability of a large amount of labeled training data for all intents to learn discriminative features.…”
Section: Related Work (mentioning; confidence: 99%)