2023
DOI: 10.1007/s00521-023-08687-7

An abstractive text summarization technique using transformer model with self-attention mechanism

Abstract: The realm of scientific text summarization has experienced remarkable progress due to the availability of annotated brief summaries and ample data. However, the utilization of multiple input modalities, such as videos and audio, has yet to be thoroughly explored. At present, scientific multimodal-input-based text summarization systems tend to employ longer target summaries like abstracts, leading to underwhelming performance on the text summarization task. In this paper, we deal with a novel task of extre…
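
The title and abstract center on a transformer with a self-attention mechanism. Purely as an illustrative aside (this is not the paper's implementation; the matrix shapes and names below are assumptions for the example), scaled dot-product self-attention reduces to a few lines of NumPy:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention (Vaswani et al., 2017).

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of every token with every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over positions
    return weights @ V                        # each output mixes information from all tokens

# Toy usage: 4 tokens, model width 8, single head of width 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)           # shape (4, 8)
```

Because every output position attends over every input position, the model can draw on the whole document when generating each summary token, which is what makes the architecture suited to abstractive summarization.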

Citations: cited by 13 publications (5 citation statements)
References: 107 publications
“…Abstractive methods are also good at incorporating information that is not explicitly mentioned but implied or essential for understanding the context. Different studies and benchmarks reported neural abstractive summarization models' superior performance in capturing the essence of the text and producing more coherent and contextually relevant summaries ([31], [32], [33], [34]).…”
Section: Abstractive Summarization (mentioning)
confidence: 99%
“…In 2022, Sumanlata et al. [2] proposed a feature-extraction-based extractive text summarization method using advanced optimization techniques. The proposed approach achieved better precision, recall, F-measure, and ROUGE of 91.97%, 94.026%, 92.36%, and 77.5%, respectively, for varying training percentages. In 2023, Sandeep et al. [3] proposed extractive text summarization using a transformer and self-attention model. They achieved 48.50% with minimized training loss.…”
Section: Hindi Text Summarization (mentioning)
confidence: 99%
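
The precision, recall, F-measure, and ROUGE figures quoted above are the standard summarization metrics. Purely for orientation (this uses Google's rouge-score package, not the evaluation code of the cited papers, and the two sentences are made-up inputs), such scores can be computed like so:

```python
# pip install rouge-score   (Google's reference ROUGE implementation)
from rouge_score import rouge_scorer

reference = "the transformer uses self-attention to summarize long documents"
candidate = "a transformer summarizes long documents with self-attention"

# ROUGE-1 measures unigram overlap; ROUGE-L measures the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, s in scores.items():
    print(f"{name}: precision={s.precision:.3f} recall={s.recall:.3f} f1={s.fmeasure:.3f}")
```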
“…In this paper, we have proposed a fine-tuned model for abstractive news summarization on the Inshorts News English dataset [36]. The proposed model is fine-tuned with a Google-mT5 tokenizer. The fine-tuning framework employed for this endeavor comprises Transformers 4.32.1, PyTorch 2.1.0, Datasets 2.12.0, and Tokenizers 0.13.3, creating a robust and versatile environment for model development.…”
Section: Proposed Model (mentioning)
confidence: 99%
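
For context, a minimal fine-tuning setup along the lines this statement describes could look as follows. This is a sketch, not the cited authors' code: the google/mt5-small checkpoint, the news.csv file, its article/summary column names, and all hyperparameters are illustrative assumptions.

```python
# pip install transformers datasets  (the citing paper names Transformers 4.32.1,
# PyTorch 2.1.0, Datasets 2.12.0, Tokenizers 0.13.3)
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForSeq2Seq,
                          MT5ForConditionalGeneration, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Hypothetical CSV with "article" and "summary" columns standing in for the news data.
raw = load_dataset("csv", data_files={"train": "news.csv"})

def preprocess(batch):
    inputs = tokenizer(batch["article"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=64, truncation=True)
    inputs["labels"] = labels["input_ids"]   # decoder targets for seq2seq training
    return inputs

tokenized = raw.map(preprocess, batched=True,
                    remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-news-summarizer",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=5e-5,
    predict_with_generate=True,   # generate full summaries during evaluation
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),  # pads inputs and labels per batch
)
trainer.train()
```

The Seq2SeqTrainer/DataCollatorForSeq2Seq pairing is the stock Hugging Face recipe for encoder-decoder fine-tuning; swapping in a different mT5 size or dataset only changes the two from_pretrained calls and the load_dataset line.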