Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015
DOI: 10.3115/v1/p15-2138

Unsupervised extractive summarization via coverage maximization with syntactic and semantic concepts

Abstract: Coverage maximization with bigram concepts is a state-of-the-art approach to unsupervised extractive summarization. It has been argued that such concepts are adequate and, in contrast to more linguistic concepts such as named entities or syntactic dependencies, more robust, since they do not rely on automatic processing. In this paper, we show that while this seems to be the case for a commonly used newswire dataset, use of syntactic and semantic concepts leads to significant improvements in performance in oth…
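The approach described in the abstract can be illustrated with a small sketch: treat each adjacent word pair as a bigram concept, weight each concept by the number of sentences that contain it, and select sentences that cover as much uncovered concept weight as possible within a word budget. The paper's formulation is an exact ILP; the greedy version below, and every name in it, is an illustrative assumption, not the authors' implementation:

```python
from collections import Counter

def bigrams(tokens):
    """Set of adjacent-word bigram concepts in a tokenized sentence."""
    return {(a, b) for a, b in zip(tokens, tokens[1:])}

def greedy_summary(sentences, budget):
    """Greedy stand-in for coverage maximization: repeatedly add the
    sentence whose still-uncovered bigram concepts carry the most
    weight, subject to a total word budget."""
    weights = Counter()  # concept weight = number of sentences containing it
    for sent in sentences:
        for bg in bigrams(sent):
            weights[bg] += 1
    summary, covered, used = [], set(), 0
    while True:
        best, best_gain = None, 0
        for sent in sentences:
            if sent in summary or used + len(sent) > budget:
                continue
            gain = sum(weights[bg] for bg in bigrams(sent) - covered)
            if gain > best_gain:
                best, best_gain = sent, gain
        if best is None:
            break
        summary.append(best)
        covered |= bigrams(best)
        used += len(best)
    return summary
```

The greedy heuristic is only an approximation of the exact coverage-maximization objective, but it makes the role of the concept weights concrete.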

Cited by 20 publications (12 citation statements)
References 8 publications
“…4 The test set consists of 111 pairs. We use the same summary budget of 335 used by Schluter and Søgaard (2015).…”
Section: Rouge-n(s) := (mentioning; confidence: 99%)
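The section label above refers to ROUGE-N, the standard evaluation metric in these experiments. A minimal recall-oriented sketch for a single reference summary, assuming tokenized input (all names here are illustrative, not the metric's official implementation):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=2):
    """ROUGE-N recall: clipped n-gram overlap divided by the number
    of n-grams in the reference summary."""
    ref = ngram_counts(reference, n)
    cand = ngram_counts(candidate, n)
    overlap = sum(min(cand[g], count) for g, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```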
“…3 The test set consists of 138 pairs. We adopt the same summary budget length: 805 words used by Schluter and Søgaard (2015). wiki: Wikipedia leading paragraphs-article pairs (all labeled "good article") from a comprehensive dump of English language Wikipedia articles.…”
Section: Rouge-n(s) := (mentioning; confidence: 99%)
“…The same idea was followed by many researches (Woodsend and Lapata 2012; Li, Qian, and Liu 2013). Recently, (Schluter and Søgaard 2015) reported that syntactic and semantic concepts might also be helpful, and some papers such as (Cao et al 2015a) combined sentence and concept selection process. With heuristic rules, ILP can apply to compress (Gillick and Favre 2009;Berg-Kirkpatrick, Gillick, and Klein 2011) or even fuse (Bing et al 2015).…”
Section: ILP For Summarization (mentioning; confidence: 99%)
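The concept-based ILP these citing papers build on (Gillick and Favre 2009) is commonly written as follows, with binary variables $c_i$ (concept $i$, weight $w_i$) and $s_j$ (sentence $j$, length $l_j$), budget $L$, and $A_{ij} = 1$ when concept $i$ occurs in sentence $j$; the notation here is mine, not copied from any of the cited papers:

```latex
\begin{aligned}
\max\;        & \sum_i w_i c_i \\
\text{s.t.}\; & \sum_j l_j s_j \le L \\
              & s_j A_{ij} \le c_i \quad \forall i, j \\
              & \sum_j A_{ij} s_j \ge c_i \quad \forall i \\
              & c_i \in \{0, 1\},\; s_j \in \{0, 1\}
\end{aligned}
```

The two coverage constraints tie the variables together: a concept counts as covered only if some selected sentence contains it, and a selected sentence forces all of its concepts to count toward the objective.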
“…There were mainly two categories for automatic text summarization: extractive summarization and abstractive summarization [14,15]. The prior one aimed to identify summary sentences or summary words from a document that was a good description of the bones of the document before placing them in order [16][17][18]. In comparison, the latter one was not constrained by using the original sentences or words in the document and could generate a more abstractive summary [19][20][21].…”
Section: Automatic Text Summarization (mentioning; confidence: 99%)