Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.414

Paraphrase Generation: A Survey of the State of the Art

Abstract: This paper focuses on paraphrase generation, a widely studied natural language generation task in NLP. With the development of neural models, paraphrase generation research has exhibited a gradual shift to neural methods in recent years. These methods provide architectures for contextualized representation of an input text and for generating fluent, diverse, and human-like paraphrases. This paper surveys various approaches to paraphrase generation with a main focus on neural methods.
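
To make the neural framing concrete, the sketch below paraphrases a sentence with a pretrained sequence-to-sequence model. The checkpoint name, beam settings, and helper function are illustrative assumptions on our part, not a method taken from the surveyed paper.

# Minimal sketch: neural paraphrase generation with a pretrained
# seq2seq model. The checkpoint name is an assumption; any
# paraphrase-tuned encoder-decoder exposes the same generate() API.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "tuner007/pegasus_paraphrase"  # assumed paraphrase checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def paraphrase(text, n=3):
    """Return n candidate paraphrases produced by beam search."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model.generate(
        **inputs,
        num_beams=10,            # wide beam: more candidates to rank
        num_return_sequences=n,  # several hypotheses for diversity
        max_length=60,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(paraphrase("Paraphrase generation is a widely studied NLG task."))

Sampling-based decoding (do_sample=True with top-p) is a common alternative to beam search when diversity matters more than fidelity to the input.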

Cited by 35 publications (32 citation statements)
References 39 publications
“…Both methods produce claims which have high overlap with the reference claims, though claims generated directly using BART are significantly closer to the reference claims than those generated using CLAIMGEN-ENTITY. Finally, we note that these scores are in the range of state-of-the-art models used for paraphrase generation, establishing a solid baseline for this task (Zhou and Bhat, 2021).…”
Section: RQ2: Claim Quality Evaluation
Mentioning confidence: 79%
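
The "overlap with the reference claims" above is typically reported with n-gram metrics; the excerpt does not name one, so the sacreBLEU call below, and the example sentences, are illustrative assumptions.

import sacrebleu  # pip install sacrebleu

# Hypothetical generated claim and reference, for illustration only.
generated = ["The treatment slows tumour growth in mouse models."]
references = [["This treatment reduces tumour growth in mice."]]

# corpus_bleu takes the hypotheses plus one list per reference stream;
# the returned object exposes the corpus-level score as .score.
bleu = sacrebleu.corpus_bleu(generated, references)
print(f"BLEU: {bleu.score:.1f}")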
“…Paraphrase generation has proven to be useful for adversarial training and data augmentation (Zhou and Bhat, 2021). Early methods adopt hand-crafted rules (McKeown, 1983), synonym substitution (Bolshakov and Gelbukh, 2004), machine translation (Quirk et al., 2004), and deep learning (Gupta et al., 2018) to improve the quality of generated sentences.…”
Section: Related Work
Mentioning confidence: 99%
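
As a concrete illustration of the earliest family mentioned above, the toy paraphraser below swaps words for WordNet synonyms. The NLTK lookup and the first-synonym heuristic are our assumptions, not the actual procedure of Bolshakov and Gelbukh (2004).

# Toy synonym-substitution paraphrasing, in the spirit of early
# rule-based methods. Requires: import nltk; nltk.download("wordnet").
from nltk.corpus import wordnet

def substitute_synonyms(tokens):
    """Swap each token for an arbitrary WordNet synonym, if one exists."""
    out = []
    for tok in tokens:
        synonyms = {
            lemma.name().replace("_", " ")
            for syn in wordnet.synsets(tok)
            for lemma in syn.lemmas()
            if lemma.name().lower() != tok.lower()
        }
        out.append(sorted(synonyms)[0] if synonyms else tok)
    return out

print(substitute_synonyms(["generate", "fluent", "sentences"]))

Such surface-level substitution often breaks fluency and meaning, which is precisely the gap the neural methods surveyed here aim to close.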
“…The main contribution in this paper is to build a model (INTERACTION) capable of providing multiple explanations, reflecting the diversity in natural languages. The motivation is that natural language usually works in such a way that humans often provide more than one explanation for their actions, and hence they may find systems that reply 'monosyllabically', or too briefly, potentially frustrating or even non-informative [15], [51]. Still, our approach raises other questions, e.g., do humans have enough time to read multiple explanations?…”
Section: Diversity of Explanation
Mentioning confidence: 99%