Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-2102

Multi-Reward Reinforced Summarization with Saliency and Entailment

Abstract: Abstractive text summarization is the task of compressing and rewriting a long document into a short summary while maintaining saliency, directed logical entailment, and non-redundancy. In this work, we address these three important aspects of a good summary via a reinforcement learning approach with two novel reward functions: ROUGESal and Entail, on top of a coverage-based baseline. The ROUGESal reward modifies the ROUGE metric by up-weighting the salient phrases/words detected via a keyphrase classifier. The…
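The saliency-weighted ROUGE idea from the abstract can be sketched as a weighted unigram recall, where reference tokens flagged by a keyphrase classifier count more than ordinary tokens. This is a minimal illustrative sketch, not the authors' implementation; the `saliency` weight table and the default weight of 1.0 are assumptions for the example.

```python
from collections import Counter

def rougesal_recall(summary_tokens, reference_tokens, saliency):
    """Saliency-weighted unigram ROUGE recall: each reference token
    contributes its saliency weight instead of a flat count of 1."""
    summary_counts = Counter(summary_tokens)
    matched = total = 0.0
    for tok, count in Counter(reference_tokens).items():
        w = saliency.get(tok, 1.0)  # non-salient tokens keep weight 1
        total += w * count
        matched += w * min(count, summary_counts.get(tok, 0))
    return matched / total if total else 0.0

# Toy example: "merger" is flagged salient by a (hypothetical) keyphrase classifier,
# so matching it moves the reward more than matching a stopword.
weights = {"merger": 3.0}
ref = "the merger was approved today".split()
hyp = "the merger was announced".split()
score = rougesal_recall(hyp, ref, weights)
```

In an RL setup such a score would be used as the per-sample reward; the entailment reward described in the abstract would come from a separate classifier.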

Cited by 135 publications (150 citation statements)
References 35 publications (80 reference statements)
“…RL has been applied to both extractive and abstractive summarisation in recent years (Ryang and Abekawa 2012; Rioux et al. 2014; Gkatzia et al. 2014; Henß et al. 2015; Paulus et al. 2017; Pasunuru and Bansal 2018; Kryscinski et al. 2018). Most existing RL-based document summarisation systems use as the rewards for RL either heuristic functions (e.g., Ryang and Abekawa 2012; Rioux et al. 2014), which do not rely on reference summaries, or ROUGE scores, which require reference summaries (Paulus et al. 2017; Pasunuru and Bansal 2018; Kryscinski et al. 2018). However, neither ROUGE nor the heuristics-based rewards can precisely reflect real users' requirements on summaries (Chaganty et al. 2018); hence, using these imprecise rewards can severely mislead the RL-based summariser.…”
Section: Introduction
confidence: 99%
“…RL is also gaining popularity as it can directly optimize non-differentiable metrics (Pasunuru and Bansal, 2018; Venkatraman et al., 2015). Paulus et al. (2017) proposed an intra-decoder model and combined RL and MLE to deal with low-quality summaries.…”
Section: Related Work
confidence: 99%
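The RL-plus-MLE combination mentioned in the statement above is commonly realized as a self-critical policy-gradient loss mixed with the usual cross-entropy loss. A minimal sketch, with illustrative toy numbers and a hypothetical mixing weight `gamma` (not the cited papers' exact values):

```python
import math

def self_critical_loss(sample_logprobs, sample_reward, baseline_reward):
    """Self-critical policy-gradient loss: a sampled summary scoring above
    the greedy-decoded baseline gets its log-probability pushed up,
    and one scoring below gets pushed down."""
    advantage = sample_reward - baseline_reward
    return -advantage * sum(sample_logprobs)

def mixed_loss(loss_rl, loss_mle, gamma=0.99):
    """Weighted mix of the RL and MLE objectives (gamma is illustrative)."""
    return gamma * loss_rl + (1.0 - gamma) * loss_mle

# Toy numbers: the sampled summary beats the greedy baseline by 0.1 reward.
lp = [math.log(0.5), math.log(0.4)]  # token log-probs of the sampled summary
l_rl = self_critical_loss(lp, sample_reward=0.45, baseline_reward=0.35)
l_total = mixed_loss(l_rl, loss_mle=2.0)
```

The MLE term keeps the generated text fluent while the reward term optimizes the non-differentiable metric; in practice the losses are tensors and `gamma` is tuned on validation data.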
“…ext-oracle further shows that improved sentence selection would bring further performance gains to extractive approaches. Abstractive systems trained on these datasets often have a hard time beating the lead, let alone ext-oracle, or display a low degree of novelty in their summaries (See et al., 2017; Tan & Wan, 2017; Paulus et al., 2018; Pasunuru & Bansal, 2018; Celikyilmaz et al., 2018; Gehrmann et al., 2018). Interestingly, lead and ext-oracle perform poorly on XSum, underlining the fact that it contains genuinely abstractive summaries.…”
Section: How Abstractive Is XSum?
confidence: 99%
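The lead baseline referred to above simply returns the first few sentences of the article. A minimal sketch, with a naive split on '. ' for illustration (real systems use a proper sentence tokenizer):

```python
def lead_n(document, n=3):
    """Lead baseline: use the first n sentences of the document as the summary."""
    sentences = [s.strip() for s in document.split(". ") if s.strip()]
    return ". ".join(sentences[:n])

article = ("The council met on Monday. It approved the new budget. "
           "Critics raised concerns. A vote is expected next month.")
summary = lead_n(article, n=2)
```

On news corpora like CNN/DailyMail this trivial baseline is hard to beat because the opening sentences already summarize the story, which is exactly the bias the quoted passage contrasts with XSum.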
“…However, these datasets often favor extractive models, which create a summary by identifying (and subsequently concatenating) the most important sentences in a document (Cheng & Lapata, 2016; Nallapati, Zhai, & Zhou, 2017; Narayan et al., 2018b). Abstractive approaches, despite being more faithful to the actual summarization task — professional editors employ various rewrite operations to transform article sentences into a summary, including compression, aggregation, and paraphrasing (Jing, 2002), aside from writing sentences from scratch — either lag behind extractive ones or are mostly extractive, exhibiting a small degree of abstraction (See et al., 2017; Paulus et al., 2018; Pasunuru & Bansal, 2018; Celikyilmaz et al., 2018; Gehrmann, Deng, & Rush, 2018).…”
Section: Introduction
confidence: 99%