2021
DOI: 10.1162/coli_a_00417
Abstractive Text Summarization: Enhancing Sequence-to-Sequence Models Using Word Sense Disambiguation and Semantic Content Generalization

Abstract: Nowadays, most research conducted in the field of abstractive text summarization focuses on neural-based models alone, without considering their combination with knowledge-based approaches that could further enhance their efficiency. In this direction, this work presents a novel framework that combines sequence-to-sequence neural-based text summarization with structure- and semantic-based methodologies. The proposed framework is capable of dealing with the problem of out-of-vocabulary or rare words, improving the pe…

Cited by 23 publications (11 citation statements)
References 64 publications (120 reference statements)
“…Second, the combination of models also looks very promising for opinion mining, as additional information on text entities has been proven to improve the quality of summarization models [65]. One suggestion is to combine T5 and LongFormer with topic-aware summarization models [66][67][68][69] that integrate topical information into sequential ones; this would be especially helpful for pools of comments with varying length.…”
Section: Future Work (mentioning)
confidence: 99%
“…In an effort to provide a qualitative assessment for evaluating the factual consistency (Kryściński et al, 2019; Goodrich et al, 2019) of the predicted summaries, we extend the approach of Kouris et al (2021), computing the precision, recall and f β scores of factual consistency. More specifically, triplets such as (subject, relation, object) are extracted from the texts (Goodrich et al, 2019).…”
Section: Factual Consistency (mentioning)
confidence: 99%
“…Since factual consistency concerns the domain of automatic TS (Kryściński et al, 2019; Goodrich et al, 2019; Kouris et al, 2021), Tables 5 and 6 report the factual consistency (Section 6.2) of the experiments on the Gigaword and CNN/DailyMail datasets, respectively. To compute the f β score, we set β = 0.264 and β = 0.158 for the Gigaword and CNN/DailyMail datasets, respectively, as these values represent the fractions of the average length of a summary to the average length of a text in the test sets of the datasets (the usage of the β coefficient is explained in detail in Section 6.2).…”
Section: Model (mentioning)
confidence: 99%
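The f β weighting quoted above can be sketched numerically. Assuming precision and recall are already computed over matched fact triplets (the triplet extraction and matching itself is not shown, and the counts below are made up for illustration), a minimal sketch:

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """Weighted harmonic mean of precision and recall.

    beta < 1 weights precision more heavily, matching the quoted setup
    where beta is the ratio of average summary length to average text
    length (e.g. 0.264 for Gigaword, 0.158 for CNN/DailyMail).
    """
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical triplet counts for one summary/source pair:
precision = 8 / 10  # 8 of 10 summary triplets also found in the source
recall = 8 / 40     # the source yields 40 triplets in total
score = f_beta(precision, recall, beta=0.264)
```

With such a small β the score stays close to precision even when recall is low, which fits the intuition that a short summary cannot be expected to cover every fact in the source.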
“…You et al [13] presented a transformer-based encoder-decoder architecture with an encoder-integrated focus-attention mechanism and a separate saliency-selection network that regulates the information flow from encoder to decoder. To predict generalised and resilient summaries, Kouris et al [8] use a copying and coverage strategy in encoder-decoder based models, as well as reinforcement learning.…”
Section: Related Work (mentioning)
confidence: 99%
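The copying-and-coverage strategy mentioned in this quote follows the pointer-generator idea: a coverage vector accumulates the attention distributions from past decoding steps, and a coverage loss penalizes re-attending to already-covered source tokens. A minimal numerical sketch (the attention weights here are invented for illustration, not taken from the cited models):

```python
# Coverage loss sketch: covloss_t = sum_i min(a_t[i], c_t[i]),
# where c_t is the sum of attention distributions from steps < t.
attention_steps = [
    [0.7, 0.2, 0.1],  # step 1: attends mostly to source token 0
    [0.6, 0.3, 0.1],  # step 2: attends to token 0 again -> penalized
]

coverage = [0.0] * 3
total_cov_loss = 0.0
for attn in attention_steps:
    # Penalize overlap between current attention and accumulated coverage.
    total_cov_loss += sum(min(a, c) for a, c in zip(attn, coverage))
    coverage = [a + c for a, c in zip(attn, coverage)]
```

The first step incurs no loss (coverage is zero), while the second step is penalized for revisiting token 0, discouraging the repetition that plain attention decoders are prone to.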
“…Because they don't provide any useful information for the user inquiry, these named entities were eliminated from the input text. • Motivated by the work of Kouris et al [8], we use a word sense disambiguator to provide additional meaning for ambiguous statements. For words that are too specialised to the topic, the disambiguator provides further information for the user question.…”
Section: Technical Data Cleaner (mentioning)
confidence: 99%
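The word sense disambiguation step this quote attributes to Kouris et al can be illustrated with a simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the surrounding context. The sense inventory below is a hand-written toy example, not WordNet, and the function is a sketch rather than the cited system's actual disambiguator:

```python
# Simplified Lesk: choose the sense whose gloss shares the most
# words with the surrounding context. Toy glosses, not WordNet.
SENSES = {
    "bank": {
        "bank.finance": "an institution that accepts deposits and lends money",
        "bank.river": "the sloping land alongside a river or stream",
    }
}

def lesk(word: str, context: str) -> str:
    context_words = set(context.lower().split())

    def overlap(gloss: str) -> int:
        # Count shared words between the gloss and the context.
        return len(set(gloss.lower().split()) & context_words)

    # Return the sense id with the largest gloss/context overlap.
    return max(SENSES[word], key=lambda s: overlap(SENSES[word][s]))

print(lesk("bank", "we walked along the river to the grassy bank"))
# -> bank.river
```

In the generalization framework described in the abstract, the chosen sense (or its hypernym) can then replace the ambiguous or rare surface word before summarization.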