Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015
DOI: 10.3115/v1/p15-1153
Abstractive Multi-Document Summarization via Phrase Selection and Merging

Abstract: We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. Then new sentences are generated by selecting and merging informative phrases to maximize the salience of phrases and meanwhile satisfy the sentence construction …
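The abstract describes a pipeline of extracting noun/verb phrases into a pool and then selecting the most salient ones for sentence construction. As a rough illustration only (the paper formulates selection and merging as a joint optimization; the frequency-based salience score, the helper names, and the greedy budgeted selection below are simplifying assumptions, not the authors' model), the selection step can be sketched as:

```python
# Illustrative sketch of salience-based phrase selection.
# Assumption: phrases have already been extracted per document;
# salience is approximated by cross-document frequency, and
# selection is a greedy top-k under a phrase budget (the paper
# itself uses a joint optimization, not this greedy heuristic).
from collections import Counter

def salience_scores(phrase_pool):
    """Score each unique phrase by its frequency across all documents."""
    return Counter(p for doc in phrase_pool for p in doc)

def select_phrases(phrase_pool, budget=10):
    """Greedily pick the `budget` most salient phrases from the pool."""
    scores = salience_scores(phrase_pool)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]

# Toy phrase pool: two documents, each a list of extracted phrases.
docs = [
    ["the earthquake", "struck the city", "rescue teams"],
    ["the earthquake", "rescue teams", "arrived quickly"],
]
print(select_phrases(docs, budget=2))  # → ['the earthquake', 'rescue teams']
```

Phrases repeated across documents score highest, so the two cross-document phrases win the budget; the merging step would then combine selected noun and verb phrases into grammatical sentences, which this sketch does not attempt.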

Cited by 120 publications (107 citation statements)
References 31 publications (24 reference statements)
“…The pyramid evaluation metric involves semantic matching of summary content units (SCUs) so as to recognize alternate realizations of the same meaning, which makes it a better metric for abstractive summary evaluation. Since manual pyramid evaluation is time-consuming and its results cannot be reproduced across different groups of assessors, we use the automated version of the pyramid method proposed in (Passonneau et al., 2013) and adopt the same setting as in (Bing et al., 2015). Table 3 shows the evaluation results of our system and the three baseline systems on DUC 2007.…”
Section: Results
confidence: 99%
“…Moreover, texts can be generated efficiently from the BSUs network. Another recent abstractive summarization method generates new sentences by selecting and merging phrases from the input documents (Bing et al., 2015). It first extracts noun phrases and verb-object phrases from the input documents, and then calculates saliency scores for them.…”
Section: Related Work
confidence: 99%
“…For example, Li (2015) and Bing et al. (2015) use an earlier version of AP based on distributional semantics (Passonneau et al., 2013) to evaluate abstractive multi-document summarization.…”
Section: Related Work
confidence: 99%
“…Recently, compressive and abstractive summarization are attracting attention (e.g., Almeida and Martins (2013), Qian and Liu (2013), Yao et al. (2015), Banerjee et al. (2015), Bing et al. (2015)). However, extractive summarization remains a primary research topic because the linguistic quality of the resultant summaries is guaranteed, at least at the sentence level, which is a key requirement for practical use (e.g., Hong et al. (2015), Yogatama et al. (2015), Parveen et al. (2015)).…”
Section: Introduction
confidence: 99%