Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1607

Negative Lexically Constrained Decoding for Paraphrase Generation

Abstract: Paraphrase generation can be regarded as monolingual translation. Unlike bilingual machine translation, paraphrase generation rewrites only a limited portion of an input sentence. Hence, previous methods based on machine translation often perform conservatively and fail to make necessary rewrites. To solve this problem, we propose a neural model for paraphrase generation that first identifies words in the source sentence that should be paraphrased. Then, these words are paraphrased by the negative lexically constrained decoding…
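The abstract's two-stage recipe (identify words to rewrite, then forbid them during decoding) maps naturally onto off-the-shelf constrained generation. Below is a minimal sketch using the `bad_words_ids` argument of Hugging Face transformers' `generate()` as the negative lexical constraint; the checkpoint name, the example sentence, and the hand-picked banned words are illustrative placeholders, not the paper's actual model or its word-identification stage.

```python
# Minimal sketch of negative lexically constrained decoding with a
# pretrained seq2seq paraphraser from Hugging Face transformers.
# The checkpoint below is a stand-in, not the paper's implementation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "tuner007/pegasus_paraphrase"  # placeholder paraphrase checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def paraphrase_with_negative_constraints(sentence, words_to_rewrite):
    """Generate a paraphrase while banning the given source words.

    `words_to_rewrite` plays the role of the words the paper's first
    stage identifies as needing a rewrite; here they are hand-supplied.
    """
    # Each banned word becomes a subword-id sequence; generate() then
    # refuses to emit any of these sequences (the negative constraint).
    bad_words_ids = tokenizer(
        words_to_rewrite, add_special_tokens=False
    ).input_ids

    inputs = tokenizer(sentence, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        bad_words_ids=bad_words_ids,
        num_beams=5,
        max_new_tokens=60,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(paraphrase_with_negative_constraints(
    "The medication alleviated his discomfort.",
    ["alleviated", "discomfort"],
))
```

Note that banning a surface form only steers the decoder away from that exact subword sequence; a practical system would also ban casing and inflectional variants of each identified word.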

Cited by 28 publications (28 citation statements). References 19 publications.
“…Chen et al. (2019) addressed the same problem, but the syntax is controlled by a sentence exemplar. Kajiwara (2019) proposed a model that first identifies a set of words to be paraphrased, and then generates the output by using a pre-trained paraphrase generation model. proposed a Transformer-based model that utilizes structured semantic knowledge to improve the quality of paraphrases.…”
Section: Related Work
confidence: 99%
“…Seq2seq-based methods have been widely used in the task of paraphrase generation (Prakash et al., 2016; Kajiwara, 2019). Li et al. (2018) further adopt reinforcement learning with a policy gradient technique to generate semantically consistent paraphrases.…”
Section: Paraphrase Generation
confidence: 99%
“…Early works on paraphrase generation mainly focus on rule-based (McKeown, 1983; Meteer and Shaked, 1988), grammar-based (Narayan et al., 2016), lexicon-based (Bolshakov and Gelbukh, 2004; Kauchak and Barzilay, 2006), and statistical machine translation (SMT)-based methods (Kauchak and Barzilay, 2006; Zhao et al., 2009). Recently, with the release of large-scale paraphrase datasets, sequence-to-sequence (seq2seq) models (Prakash et al., 2016; Kajiwara, 2019; Li et al., 2018; Gupta et al., 2018; Shakeri and Sethy, 2019; Yang et al., 2019) have become the dominant technique in the field of paraphrase generation.…”
Section: Introduction
confidence: 99%
“…On the other hand, as pointed out in Kajiwara (2019), paraphrase generation rewrites only a limited portion of an original input, and the reference often includes words that occur in the original input; thus a sentence with a higher ROUGE-ref score may have low diversity (Miao et al., 2019). Therefore, a reward based on the reference does not focus on the variation between the sampled sentence and the original input.…”
Section: Rewards For Multi-objective Learning
confidence: 99%
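To make the quoted concern concrete, here is a toy illustration (invented sentences; plain unigram F1 standing in for ROUGE, not the cited papers' actual reward): a verbatim copy of the source can outscore a genuine rewrite under a reference-only reward, which is exactly why a diversity term measured against the input is needed.

```python
# Toy illustration: references share many words with the source, so a
# near-copy of the input can score high against the reference while
# changing nothing. Unigram F1 stands in for ROUGE here.
from collections import Counter

def unigram_f1(hyp, ref):
    hyp_counts, ref_counts = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((hyp_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(hyp_counts.values())
    r = overlap / sum(ref_counts.values())
    return 2 * p * r / (p + r)

source    = "the medicine eased his pain"
reference = "the drug eased his pain"
copy_hyp  = "the medicine eased his pain"   # verbatim copy of the source
para_hyp  = "the drug relieved his ache"    # genuine rewrite

for name, hyp in [("copy", copy_hyp), ("paraphrase", para_hyp)]:
    ref_score = unigram_f1(hyp, reference)   # reference-based reward
    src_overlap = unigram_f1(hyp, source)    # how much was simply copied
    print(f"{name}: ref-F1={ref_score:.2f}, source-overlap={src_overlap:.2f}")
```

Running this, the copy scores 0.80 against the reference versus 0.60 for the rewrite, despite overlapping the source completely (1.00 vs. 0.40).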
“…In recent years, there has also been growing interest in generating lexically and syntactically diverse paraphrases (Gupta et al., 2018; Xu et al., 2018b; Xu et al., 2018a; Park et al., 2019; Qian et al., 2019; Kajiwara, 2019). For Seq2Seq models, the techniques for generating diverse paraphrases mainly fall into two categories: i) applying decoding methods such as beam search or multiple decoders; ii) introducing random noise as model input.…”
Section: Introduction
confidence: 99%