2022
DOI: 10.1016/j.ipm.2022.102913
Key phrase aware transformer for abstractive summarization

Cited by 18 publications (6 citation statements)
References 45 publications
“…Several works aim to reduce hallucinations or improve the factual consistency of abstractive summarizers, e.g., employing content planning [21], reinforcement learning [22], or constraining the generation [23]. Abstractive summarizers can also be guided, for instance, to better aggregate semantic information [8], to focus on specific topics [24], or to better represent keywords and the relationships among entities [25,26].…”
Section: Related Work
confidence: 99%
“…These datasets focus only on the text, regard tabular data as noise, and filter it out. Previous summarization methods can generally be classified into two categories: extractive (Erkan and Radev, 2004; Mihalcea and Tarau, 2004) and abstractive (Nallapati et al., 2016; Zhang et al., 2020; Liu et al., 2022b) methods. To model longer input sequences with limited GPU memory, Huang et al. (2021) compare various efficient attention mechanisms for the encoder and propose an encoder-decoder attention named Hepos.…”
Section: Automatic Document Summarization
confidence: 99%
“…As for the second question, most scholars have treated these tasks as independent. However, a few have argued that they are interrelated and can benefit each other [3], [4], [23]-[27]. There are several reasons behind this argument.…”
Section: Introduction
confidence: 99%
“…Therefore, interacting with RQE can be fruitful, since it ensures the generated summaries are logically entailed by their source texts [23], [28]-[30]. Another reason is that generating concise summaries requires a greater focus on the main topics of questions, which existing QS models often fail to achieve [27]. Consequently, while duplicate information is repeatedly given attention, essential phrases may go unnoticed [26], [27].…”
Section: Introduction
confidence: 99%