Proceedings of the 3rd Workshop on Neural Generation and Translation 2019
DOI: 10.18653/v1/d19-5623

Paraphrasing with Large Language Models

Abstract: Recently, large language models such as GPT-2 have shown themselves to be extremely adept at text generation and, with the aid of fine-tuning, have also achieved high-quality results on many downstream NLP tasks such as text classification, sentiment analysis, and question answering. We present a useful technique for using a large language model to perform the task of paraphrasing on a variety of texts and subjects. Our approach is demonstrated to be capable of generating paraphrases not only at a sen…
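
The approach the abstract outlines, fine-tuning GPT-2 and then generating paraphrases from it, can be sketched with the Hugging Face `transformers` library. The separator prompt, checkpoint, and sampling settings below are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of GPT-2-based paraphrase generation, assuming a
# fine-tuned checkpoint and a "source >>> paraphrase" prompt format.
# Both are hypothetical stand-ins for the paper's actual training setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # a fine-tuned checkpoint would go here

def paraphrase(text: str, max_new_tokens: int = 60) -> str:
    # Hypothetical prompt: source text followed by a separator the model
    # would have seen during fine-tuning.
    prompt = f"{text} >>> "
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,   # sampling encourages lexical variety in paraphrases
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated[len(prompt):].strip()

print(paraphrase("Large language models are adept at text generation."))
```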


Cited by 55 publications (41 citation statements) · References 16 publications

“…The model scored an F1-score of 76.2%, with a recall of 72.02% and a precision of 80.9%. For French tweets we used CamemBERT (Martin et al., 2019) from Hugging Face. CamemBERT scored an F1-score of 76.32%, with a recall of 71.45% and a precision of 81.91%.…”
Section: Discussion (mentioning)
confidence: 99%
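
For context, the setup this statement describes, French-tweet classification with CamemBERT scored by precision, recall, and F1, might look roughly like the sketch below. The checkpoint, labels, and example data are placeholders, not the citing authors' model or dataset.

```python
# A minimal sketch of CamemBERT-based tweet classification and the metrics
# the statement reports. "camembert-base" alone has an untrained
# classification head; a task-specific fine-tune would be used in practice.
from transformers import pipeline
from sklearn.metrics import precision_recall_fscore_support

classifier = pipeline("text-classification", model="camembert-base")

tweets = ["Ce vaccin est efficace.", "Encore une fausse information."]
preds = [classifier(t)[0]["label"] for t in tweets]

# Hypothetical gold labels, only to show how precision/recall/F1 are computed.
gold = ["LABEL_0", "LABEL_1"]
p, r, f1, _ = precision_recall_fscore_support(
    gold, preds, average="binary", pos_label="LABEL_1", zero_division=0
)
print(f"precision={p:.2%} recall={r:.2%} F1={f1:.2%}")
```
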
“…This brings us to other areas of research that are very related to our task: automatic paraphrasing (Wieting et al., 2015; Witteveen and Andrews, 2019), as well as research in diverse beam search methods (Vijayakumar et al., 2016; Li et al., 2016) for decoding multiple natural language outputs. We are happy that this shared task can serve as a forum for studying the intersection of these problems, and it is our hope that the STAPLE task data will continue to foster research in all of these areas.…”
Section: Related Work (mentioning)
confidence: 99%
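
The diverse beam search decoding cited here (Vijayakumar et al., 2016) is exposed directly by `transformers.generate` through beam groups and a diversity penalty. A minimal sketch, assuming a generic `t5-small` checkpoint rather than any model from the cited work:

```python
# Diverse beam search: beams are split into groups, and each group is
# penalized for repeating tokens already chosen by earlier groups, yielding
# multiple distinct outputs instead of near-duplicate beams.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("paraphrase: The cat sat on the mat.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,       # 3 groups of 2 beams each (num_beams must divide evenly)
    diversity_penalty=1.0,   # discourages groups from echoing one another
    num_return_sequences=3,  # one candidate per group
    max_new_tokens=30,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```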