Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, 2017
DOI: 10.18653/v1/e17-2112
Detecting (Un)Important Content for Single-Document News Summarization

Abstract: We present a robust approach for detecting intrinsic sentence importance in news, by training on two corpora of document-summary pairs. When used for single-document summarization, our approach, combined with the "beginning of document" heuristic, outperforms a state-of-the-art summarizer and the beginning-of-article baseline in both automatic and manual evaluations. These results represent an important advance because in the absence of cross-document repetition, single document summarizers for news have not been …

Cited by 13 publications (13 citation statements); references 11 publications.
“…Absolute vs Relative Summary Ranking. In relative assessment of summarization, annotators are shown two or more summaries and are asked to rank them according to the dimension in question (Yang et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018a; Guo et al., 2018; Krishna and Srinivasan, 2018). The relative assessment is often done using paired comparison (Thurstone, 1994) or best-worst scaling (Woodworth and G, 1991; Louviere et al., 2015) to improve inter-annotator agreement.…”
Section: Literature Review
confidence: 99%
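The best-worst scaling mentioned in the excerpt above scores each item by how often annotators pick it as best versus worst across the tuples in which it appears. A minimal sketch of that counting procedure (the function name `best_worst_scores` and the toy judgments are hypothetical, not from any cited paper):

```python
from collections import Counter

def best_worst_scores(judgments):
    """Score items from best-worst scaling judgments.

    Each judgment is a tuple (shown_items, best_item, worst_item).
    An item's score is (#times best - #times worst) / #times shown,
    ranging from -1 (always picked worst) to +1 (always picked best).
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in judgments:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Three annotators each see the same four summaries A-D and pick best/worst.
judgments = [
    (("A", "B", "C", "D"), "A", "D"),
    (("A", "B", "C", "D"), "A", "C"),
    (("A", "B", "C", "D"), "B", "D"),
]
scores = best_worst_scores(judgments)
```

The resulting scores induce a ranking (here A > B > C > D), which is why the technique tends to yield higher inter-annotator agreement than asking for absolute ratings.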
“…As we will shortly see, the classifier is impressively accurate on instances in which the annotators agreed in their initial annotation and quite poor on the leads that required adjudication. These findings suggest that in future work it may be beneficial to develop a classifier for sentence-level prediction (Yang, Bao, & Nenkova, 2017) of content-density, which would be helpful for characterizing leads that mix informative and entertaining sentences. Another clear alternative is to develop a classifier to predict that a text is ambiguous in terms of its content-density status.…”
Section: Basic Set
confidence: 95%
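The sentence-level content-density classifier proposed in the excerpt above could take many forms; the cited work does not prescribe a model. As one hedged illustration, a bag-of-words Naive Bayes classifier over sentences (the class name, labels, and training sentences here are all hypothetical):

```python
import math
import re
from collections import Counter, defaultdict

class NaiveBayesSentenceClassifier:
    """Toy bag-of-words Naive Bayes for labeling sentences, e.g. as
    'informative' vs 'entertaining'. A sketch only, not the method
    of any cited paper."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing
        self.word_counts = defaultdict(Counter) # label -> word counts
        self.class_counts = Counter()           # label -> #sentences
        self.vocab = set()

    @staticmethod
    def _tokens(sentence):
        return re.findall(r"[a-z']+", sentence.lower())

    def fit(self, sentences, labels):
        for sent, label in zip(sentences, labels):
            toks = self._tokens(sent)
            self.word_counts[label].update(toks)
            self.class_counts[label] += 1
            self.vocab.update(toks)
        return self

    def predict(self, sentence):
        toks = self._tokens(sentence)
        n = sum(self.class_counts.values())
        best_label, best_lp = None, -math.inf
        for label, count in self.class_counts.items():
            total = sum(self.word_counts[label].values())
            lp = math.log(count / n)  # log prior
            for t in toks:            # log likelihood with smoothing
                lp += math.log((self.word_counts[label][t] + self.alpha)
                               / (total + self.alpha * len(self.vocab)))
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

clf = NaiveBayesSentenceClassifier().fit(
    ["The company announced quarterly revenue figures.",
     "Officials announced new revenue rules.",
     "What a hilarious joke that was.",
     "A funny and amusing anecdote."],
    ["informative", "informative", "entertaining", "entertaining"],
)
pred = clf.predict("They announced revenue targets.")
```

Such per-sentence predictions could then be aggregated to flag leads that mix the two sentence types, as the excerpt suggests.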
“…Early work on extractive summarisation centers solely on simple-to-compute statistics, for instance word frequency [3] (Luhn, 1958), location in document [4] (Baxendale, 1958), and TF-IDF [5] (Salton et al., 1996). Later work explored more aspects such as sentence position [6] (Yang et al., 2017), sentence length [7] (Radev et al., 2004), words in the title, presence of proper nouns, places or things, and word recurrence [8] (Nenkova et al., 2006), as well as a graph-based ranking model for text processing [11] (Rada Mihalcea and Paul Tarau). A stochastic graph-based method was proposed by Dragomir R. Radev [12] for computing the relative importance of textual portions in NLP.…”
Section: II. Related Work
confidence: 99%
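The early statistics cited in the excerpt above (word frequency and document position) can be combined into a minimal extractive summarizer: score each sentence by the average corpus frequency of its words, add a small bonus for the leading sentence, and keep the top-scoring sentences in document order. This is a hypothetical sketch in the spirit of those features, not the method of any single cited paper; `extract_summary` and `lead_bonus` are invented names:

```python
import re
from collections import Counter

def extract_summary(text, n_sentences=2, lead_bonus=0.1):
    """Frequency-based sentence scoring in the spirit of Luhn (1958),
    with a small position bonus for the first sentence, echoing the
    location feature of Baxendale (1958)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(i, sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        if not toks:
            return 0.0
        base = sum(freq[t] for t in toks) / len(toks)  # avg word frequency
        return base + (lead_bonus if i == 0 else 0.0)  # lead-sentence bonus

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]), reverse=True)
    chosen = sorted(ranked[:n_sentences])  # restore document order
    return " ".join(sentences[i] for i in chosen)

summary = extract_summary("Cats are great. Cats cats cats. Dogs exist.",
                          n_sentences=1)
```

Graph-based rankers such as the one in [11] replace this per-sentence score with centrality computed over a sentence-similarity graph, but the extract-then-order skeleton stays the same.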