2022
DOI: 10.1007/978-3-031-21743-2_42
A Survey of Abstractive Text Summarization Utilising Pretrained Language Models

Cited by 4 publications (6 citation statements)
References 18 publications
“…Addressing these challenges necessitates a more balanced approach to learning. One potential avenue is the incorporation of regularization techniques [72]. Regularization, in essence, adds a penalty to the loss function, discouraging the model from fitting too closely to every data point and, in turn, mitigating overfitting.…”
Section: Explore Noise (mentioning)
confidence: 99%
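As an illustration of the penalty idea described in this statement, a minimal sketch in PyTorch is given below; the model, data, and penalty coefficient are assumptions for the example, not details taken from the cited survey.

```python
# Minimal sketch: adding an L2 regularization penalty to a training loss.
# Model, data, and l2_lambda are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)          # placeholder model
criterion = nn.MSELoss()
l2_lambda = 1e-4                  # strength of the penalty (assumed)

x = torch.randn(32, 10)
y = torch.randn(32, 1)

data_loss = criterion(model(x), y)

# The penalty discourages large weights, so the model cannot fit every
# training point exactly, which mitigates overfitting.
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
loss = data_loss + l2_lambda * l2_penalty
loss.backward()
```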
“…The data is particularly useful for training deep transfer learning models for title generation, abstractive summarization [1, 5], and sentiment analysis of airline reviews. The title generation data is applicable for domain adaptive training [2, 3] of pretrained language models to improve performance on the target language generation tasks for the airline reviews domain.…”
Section: Value of the Data (mentioning)
confidence: 99%
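A hedged sketch of the domain adaptive fine-tuning described above, using a pretrained seq2seq model from Hugging Face transformers; the checkpoint (t5-small), the review/title pair, and the hyperparameters are illustrative assumptions, not details from the cited dataset paper.

```python
# Sketch: fine-tuning a pretrained seq2seq model on review -> title pairs
# for title generation in the airline reviews domain. All specifics below
# (checkpoint, example text, learning rate) are assumed for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

reviews = ["The flight was delayed for three hours with no updates from staff."]
titles = ["Long delay with poor communication"]

inputs = tokenizer(["summarize: " + r for r in reviews],
                   return_tensors="pt", padding=True, truncation=True)
labels = tokenizer(titles, return_tensors="pt",
                   padding=True, truncation=True).input_ids

model.train()
loss = model(**inputs, labels=labels).loss  # cross-entropy over title tokens
loss.backward()
optimizer.step()
```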
“…Moreover, open-source models offer high adaptability and can be readily fine-tuned on domain-specific datasets with minimal effort. Furthermore, previous studies have demonstrated that both BART and T5 can generate summaries of comparable quality to those produced by smaller GPT-3 models [9][10][11].…”
Section: Introduction (mentioning)
confidence: 99%
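A minimal sketch of generating an abstractive summary with one of the open-source models mentioned in this statement; the checkpoint (facebook/bart-large-cnn), the input text, and the generation settings are assumptions for illustration, not taken from the cited studies.

```python
# Sketch: abstractive summarization with a pretrained BART checkpoint.
# The chosen checkpoint and generation parameters are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

text = ("Pretrained encoder-decoder models such as BART and T5 can be "
        "fine-tuned on domain-specific datasets to produce abstractive "
        "summaries comparable to those of much larger models.")

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```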