2021
DOI: 10.1109/access.2021.3052783

A Survey of the State-of-the-Art Models in Neural Abstractive Text Summarization

Abstract: Dealing with vast amounts of textual data requires efficient systems, and automatic summarization systems can address this need. It is therefore essential to revisit the design of existing automatic summarization systems and innovate on them so that they can meet the demands of continuously growing data, based on user needs. This study surveys the scientific literature for information and knowledge about recent research in automatic text summarizat…

Cited by 55 publications (24 citation statements)
References 31 publications
“…Syed et al. [17] presented a single-hidden-layer multilayer perceptron for character recognition. The character is identified by examining its forms and comparing its features.…”
Section: Literature Review (mentioning)
Confidence: 99%
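
To make the cited approach concrete, here is a minimal sketch of a single-hidden-layer multilayer perceptron classifier in PyTorch. The input size (28x28 glyph images), hidden width, and 26-class output are illustrative assumptions, not details from Syed et al. [17].

```python
# Minimal sketch: a single-hidden-layer MLP character classifier.
# Shapes below (28x28 glyph images, 26 character classes) are
# illustrative assumptions, not taken from Syed et al. [17].
import torch
import torch.nn as nn

class CharMLP(nn.Module):
    def __init__(self, in_features=28 * 28, hidden=256, num_classes=26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                    # glyph image -> feature vector
            nn.Linear(in_features, hidden),
            nn.ReLU(),                       # the single hidden layer
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)  # raw logits; softmax gives class probabilities

model = CharMLP()
logits = model(torch.randn(1, 1, 28, 28))   # one dummy glyph
print(logits.argmax(dim=1))                 # predicted character index
```
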
“…For deep learning-based abstractive summarization, the sequence-to-sequence framework has become the main architecture (El-Kassas et al., 2021; Zhang et al., 2020). Lately, transformer-based architectures have dramatically improved performance and become the standard for neural abstractive models (Syed et al., 2021; Vaswani et al., 2017). Recent studies on neural abstractive models have focused on specific forms of datasets, such as long text (Cohan et al., 2018; Sharma et al., 2019), diverse domains (Hermann et al., 2015; Kornilova & Eidelman, 2019; Koupaee & Wang, 2018) and others.…”
Section: Text Summarization (mentioning)
Confidence: 99%
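
As a concrete illustration of the transformer-based abstractive models mentioned above, the following sketch runs a pre-trained sequence-to-sequence summarizer through the Hugging Face transformers pipeline. The checkpoint (facebook/bart-large-cnn) and generation settings are illustrative choices, not prescribed by the cited works.

```python
# Hedged sketch: abstractive summarization with a pre-trained
# transformer seq2seq model via the Hugging Face pipeline API.
from transformers import pipeline

# Checkpoint choice is an assumption for illustration only.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Dealing with vast amounts of textual data requires efficient systems. "
    "Automatic summarization condenses long documents into short, "
    "informative summaries so readers can keep pace with growing data."
)

result = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])  # the generated abstractive summary
```
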
“…The success of deep learning-based language models has greatly improved the quality of abstractive summarization [27][28][29]. Recently, Transformer-based models have become the state-of-the-art for abstractive text summarization [30]. For example, BERTSum [29] leverages the strong language modeling capability of a pre-trained BERT [1] model to achieve high-quality abstractive summarization through transfer learning.…”
Section: TS System (mentioning)
Confidence: 99%
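
The transfer-learning idea behind BERTSum, reusing a pre-trained BERT as the backbone of a sequence-to-sequence summarizer, can be approximated with Hugging Face's generic EncoderDecoderModel, as sketched below. This is not the original BERTSum [29] implementation: the cross-attention weights start untrained and would need fine-tuning on document/summary pairs before the output is meaningful.

```python
# Hedged sketch of BERTSum-style transfer learning: warm-start a
# seq2seq summarizer from pre-trained BERT weights, then fine-tune.
# Uses Hugging Face's generic EncoderDecoderModel, not BERTSum itself.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"  # BERT as encoder and decoder
)

# Generation settings the generic wrapper cannot infer on its own.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("A long source document to be summarized.", return_tensors="pt")
# Cross-attention is randomly initialized, so the output is meaningless
# until the model is fine-tuned on document/summary pairs.
ids = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```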