Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1100
Supervised Learning of Automatic Pyramid for Optimization-Based Multi-Document Summarization

Abstract: We present a new supervised framework that learns to estimate automatic Pyramid scores and uses them for optimization-based extractive multi-document summarization. For learning automatic Pyramid scores, we developed a method for automatic training data generation which is based on a genetic algorithm using automatic Pyramid as the fitness function. Our experimental evaluation shows that our new framework significantly outperforms strong baselines regarding automatic Pyramid, and that there is much room for imp…
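The training-data generation step sketched in the abstract — a genetic algorithm searching over sentence selections with a summary-quality score as fitness — can be illustrated in a few lines. This is a hedged sketch, not the authors' implementation: the `fitness` callback stands in for the automatic Pyramid score, and the function name, operators, and parameters are all hypothetical.

```python
import random

def genetic_summary_search(sentences, fitness, budget=3,
                           pop_size=20, generations=50, seed=0):
    """Search for a high-scoring extractive summary with a simple GA.

    `sentences` is a list of candidate sentences; `fitness` maps a tuple
    of selected sentence indices to a score (here a stand-in for the
    automatic Pyramid score); `budget` caps the number of sentences.
    """
    rng = random.Random(seed)

    def random_individual():
        # An individual is a sorted tuple of `budget` sentence indices.
        return tuple(sorted(rng.sample(range(len(sentences)), budget)))

    def mutate(ind):
        # Drop one selected sentence and add random ones back up to budget.
        out = set(ind)
        out.discard(rng.choice(ind))
        while len(out) < budget:
            out.add(rng.randrange(len(sentences)))
        return tuple(sorted(out))

    def crossover(a, b):
        # Child draws its sentences from the union of both parents.
        pool = list(set(a) | set(b))
        return tuple(sorted(rng.sample(pool, min(budget, len(pool)))))

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Elitism: keep the fitter half, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [
            mutate(crossover(rng.choice(survivors), rng.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)
```

Because the fitness function is a black box, the same loop works whether the score is an automatic Pyramid estimate or any other summary-level metric; high-scoring selections found this way can then serve as supervised training targets.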

Cited by 18 publications (12 citation statements)
References 14 publications
“…A summarizer intends to extract or generate a summary maximizing θ_I. This fits within the general optimization framework for summarization (McDonald, 2007; Peyrard and Eckle-Kohler, 2017b; Peyrard and Gurevych, 2018). The background knowledge and the choice of semantic units are free parameters of the theory. They are design choices which can be explored empirically by subsequent works.…”
Section: Discussion
confidence: 99%
“…One important architecture is to model MDS as a budgeted maximum coverage problem, including the prior approach (McDonald, 2007) and improved models (Woodsend and Lapata, 2012; Li et al., 2013; Boudin et al., 2015). There are still recent studies under the traditional extractive framework (Peyrard and Eckle-Kohler, 2017; Avinesh and Meyer, 2017).…”
Section: Extractive Summarization Methods
confidence: 99%
“…For example, the frequency computation of words or n-grams can be replaced with learned weights (Li et al., 2013). Additionally, structured output learning permits scoring smaller units while providing supervision at the summary level (Li et al., 2009; Peyrard and Eckle-Kohler, 2017).…”
Section: Extractive Summarization
confidence: 99%