Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018
DOI: 10.18653/v1/p18-1013

A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss

Abstract: We propose a unified model combining the strengths of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores, but its output is less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated.
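As a reading aid, the two mechanisms the abstract names, sentence-level attention modulating word-level attention and a loss that penalizes inconsistency between the two, can be sketched roughly as follows. This is a PyTorch-style illustration under assumed tensor shapes and a hypothetical top-k constant, not the authors' released implementation.

```python
import torch

def modulated_word_attention(word_attn, sent_attn, word_to_sent):
    # word_attn:    (batch, num_words)      word-level attention at one decoder step
    # sent_attn:    (batch, num_sentences)  sentence-level attention from the extractor
    # word_to_sent: (num_words,)            sentence index of each source word (long tensor)
    sent_attn_per_word = sent_attn[:, word_to_sent]        # broadcast sentence scores to words
    combined = word_attn * sent_attn_per_word              # words in weakly attended sentences shrink
    return combined / combined.sum(dim=-1, keepdim=True)   # renormalize to a distribution

def inconsistency_loss(word_attn, sent_attn_per_word, k=3, eps=1e-8):
    # Penalize decoder steps whose top-k attended words lie in sentences
    # the extractor scores as unimportant.
    topk_vals, topk_idx = word_attn.topk(k, dim=-1)        # (batch, k) strongest word attentions
    topk_sent = sent_attn_per_word.gather(-1, topk_idx)    # sentence score of each top word
    return -torch.log((topk_vals * topk_sent).mean(dim=-1) + eps).mean()
```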

Cited by 218 publications (211 citation statements)
References 15 publications (39 reference statements)

“…Most content selection models train the selector with heuristic rules (Hsu et al., 2018; Li et al., 2018; Gehrmann et al., 2018; Yao et al., 2019; Moryossef et al., 2019), which fail to fully capture the relation between selection and generation. Mei et al. (2016); Li et al. (2018) "soft-select" word or sentence embeddings based on a gating function.…”
Section: Related Work
confidence: 99%
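A minimal sketch of the "soft-select" gating idea this statement refers to, under the assumption that the selector operates on sentence embeddings; the class name, layer, and shapes are illustrative and not taken from any of the cited papers.

```python
import torch
import torch.nn as nn

class SoftSelector(nn.Module):
    """Scale each sentence embedding by a learned gate in [0, 1] instead of
    making a hard keep/drop decision with heuristic rules."""
    def __init__(self, hidden_size):
        super().__init__()
        self.gate = nn.Linear(hidden_size, 1)

    def forward(self, sent_embs):                 # (batch, num_sentences, hidden)
        g = torch.sigmoid(self.gate(sent_embs))   # (batch, num_sentences, 1) gate values
        return g * sent_embs                      # softly down-weighted sentence representations
```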
“…With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of Chen and Bansal (2018). In addition, the result is better than those of all the other abstractive methods that exploit extractive approaches (Hsu et al., 2018; Chen and Bansal, 2018; Gehrmann et al., 2018).…”
Section: CNN/Daily Mail
confidence: 84%
“…More recently, several studies have attempted to improve the performance of abstractive summarization by explicitly combining it with extractive models. Some notable examples include the use of inconsistency loss (Hsu et al., 2018), key phrase extraction (Li et al., 2018; Gehrmann et al., 2018), and sentence extraction with rewriting (Chen and Bansal, 2018). Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor.…”
Section: DUC-2002
confidence: 99%
“…We adopt the widely used ROUGE [27], computed with pyrouge, as the evaluation metric. It measures the similarity of the output summary and the standard reference by counting overlapping n-grams, such as unigrams, bigrams, and the longest common subsequence (LCS).…”

Reported ROUGE scores:

Methods              R-1    R-2    R-L
Extractive
  Lead-3 [5]         40.34  17.70  36.57
  RuNNer [14]        39.60  16.20  35.30
  Refresh [17]       40.00  18.20  36.60
  RNN-RL [16]        41.47  18.72  37.76
  NeuSUM [18]        41.59  19.01  37.98
Abstractive
  Coverage [5]       39.53  17.28  36.38
  Intra-attn [26]    39.87  15.82  36.90
  Inconsistency [9]  40.68  17.97  37.13
  Bottom-up [22]     41.22  —      —

Section: Results
confidence: 99%
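For readers unfamiliar with the metric, a toy illustration of what "overlapping n-grams" means. The scores above come from the official ROUGE toolkit via pyrouge, not from code like this; the function below is a deliberately simplified recall computation.

```python
from collections import Counter

def ngram_recall(candidate, reference, n=1):
    """Recall of reference n-grams covered by the candidate, with clipped counts."""
    cand = Counter(zip(*(candidate[i:] for i in range(n))))
    ref = Counter(zip(*(reference[i:] for i in range(n))))
    overlap = sum((cand & ref).values())           # clipped overlapping n-grams
    return overlap / max(sum(ref.values()), 1)

# n=1 roughly corresponds to ROUGE-1 recall and n=2 to ROUGE-2 recall;
# ROUGE-L instead scores the longest common subsequence (LCS).
```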
“…In order to solve this problem, the Pointer Network [21], [5] and CopyNet [4] have been proposed to allow both copying words from the original text and generating arbitrary words from a fixed vocabulary. [9] propose a unified model via inconsistency loss to combine the extractive and abstractive methods. [22] adopt bottom-up attention to alleviate the issue of the Pointer Network tending to copy long sequences.…”
Section: Related Work
confidence: 99%
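A rough sketch of the copy/generate mixture that pointer-style networks use, as described in the statement above; the function name, shapes, and variable names are illustrative assumptions rather than the implementation of [21], [5], or [4].

```python
import torch

def pointer_generator_dist(vocab_dist, attn, src_ids, p_gen):
    # vocab_dist: (batch, vocab_size)  softmax over the fixed output vocabulary
    # attn:       (batch, src_len)     attention over source positions
    # src_ids:    (batch, src_len)     vocabulary id of each source token
    # p_gen:      (batch, 1)           probability of generating rather than copying
    gen = p_gen * vocab_dist
    copy = (1.0 - p_gen) * attn
    # add the copy probability mass onto the vocabulary ids of the source words
    return gen.scatter_add(1, src_ids, copy)
```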