Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.3115/v1/d14-1085

Unsupervised Sentence Enhancement for Automatic Summarization

Abstract: We present sentence enhancement as a novel technique for text-to-text generation in abstractive summarization. Compared to extraction or previous approaches to sentence fusion, sentence enhancement increases the range of possible summary sentences by allowing the combination of dependency subtrees from any sentence in the source text. Our experiments indicate that our approach yields summary sentences that are competitive with a sentence fusion baseline in terms of content quality, but better in terms of grammaticality.
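The abstract describes an operation on dependency parses: grafting a subtree taken from one source sentence onto the parse tree of another, then linearizing the result into a new summary sentence. The toy Python sketch below illustrates only that splice-and-linearize idea; the Node structure, the single-word attachment heuristic, and the left/right linearization are invented here for illustration and stand in for the paper's actual compatibility scoring and tree linearization, which are not reproduced.

```python
# Toy illustration of sentence enhancement's core move: splicing a
# dependency subtree from one source sentence into another sentence's
# parse tree. NOT the paper's implementation; the attachment heuristic
# and linearization below are invented stand-ins.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """A dependency-tree node: a word, its relation to its head,
    whether it precedes its head, and its dependents."""
    word: str
    deprel: str = "root"
    left: bool = False                      # dependent precedes its head?
    children: List["Node"] = field(default_factory=list)


def linearize(node: Node) -> str:
    """Flatten a tree to a word string. Real systems learn the word
    order; this left/right split is a crude stand-in."""
    left = [linearize(c) for c in node.children if c.left]
    right = [linearize(c) for c in node.children if not c.left]
    return " ".join(left + [node.word] + right)


def enhance(base: Node, donor: Node, target_word: str) -> Node:
    """Attach `donor` under the first node in `base` whose word matches
    `target_word` -- a toy substitute for the paper's compatibility
    checks between candidate attachment sites."""
    stack = [base]
    while stack:
        node = stack.pop()
        if node.word == target_word:
            node.children.append(donor)
            return base
        stack.extend(node.children)
    return base  # no compatible site found; leave base unchanged


# Sentence A: "The senate passed the bill"
base = Node("passed", children=[
    Node("senate", "nsubj", left=True,
         children=[Node("The", "det", left=True)]),
    Node("bill", "dobj",
         children=[Node("the", "det", left=True)]),
])

# Subtree taken from some other source sentence: "... on Tuesday"
donor = Node("on", "prep", children=[Node("Tuesday", "pobj")])

enhanced = enhance(base, donor, target_word="passed")
print(linearize(enhanced))  # -> "The senate passed the bill on Tuesday"
```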

Cited by 28 publications (12 citation statements)
References 11 publications
“…Gerani et al. (2014) generate abstractive summaries by modifying discourse parse trees. Our work is similar in spirit to Cheung and Penn (2014), which splices and recombines dependency parse trees to produce abstractive summaries. In contrast, our work operates on semantic graphs, taking advantage of the recently developed AMR Bank.…”
Section: Related and Future Work
Citation type: mentioning (confidence: 99%)
“…Earlier summarization work was based on extraction and compression-based approaches (Jing, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2008; Filippova et al., 2015), with more focus on graph-based (Giannakopoulos, 2009; Ganesan et al., 2010) and discourse tree-based (Gerani et al., 2014) models. Recent focus has shifted towards abstractive, rewriting-based summarization based on parse trees (Cheung and Penn, 2014; Wang et al., 2016), Abstract Meaning Representations (Liu et al., 2015; Dohare and Karnick, 2017), and neural network models with pointer-copy mechanism and coverage (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; See et al., 2017), as well as reinforce-based metric rewards (Ranzato et al., 2015; Paulus et al., 2017). We also use reinforce-based models, but with novel reward functions and better simultaneous multi-reward optimization methods.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Automatic text summarization has been progressively improving over time, initially focused more on extractive and compressive models (Jing and McKeown, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2008; Filippova et al., 2015; Kedzie et al., 2015), and moving towards compressive and abstractive summarization based on graphs and concept maps (Giannakopoulos, 2009; Ganesan et al., 2010; Falke and Gurevych, 2017), discourse trees (Gerani et al., 2014), syntactic parse trees (Cheung and Penn, 2014; Wang et al., 2013), and Abstract Meaning Representations (AMR) (Liu et al., 2015; Dohare and Karnick, 2017). Recent work has also adopted machine-translation-inspired neural seq2seq models for abstractive summarization, with advances in hierarchical, distractive, saliency, and graph-attention modeling (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Chen et al., 2016; Tan et al., 2017).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)