Proceedings of the 11th International Conference on Natural Language Generation 2018
DOI: 10.18653/v1/w18-6545
Adapting Neural Single-Document Summarization Model for Abstractive Multi-Document Summarization: A Pilot Study

Abstract: To date, neural abstractive summarization methods have achieved great success for single-document summarization (SDS). However, due to the lack of large-scale multi-document summaries, such methods can hardly be applied to multi-document summarization (MDS). In this paper, we investigate neural abstractive methods for MDS by adapting a state-of-the-art neural abstractive summarization model for SDS. We propose an approach to extend the neural abstractive model trained on large-scale SDS data to the MDS task.

Cited by 36 publications (34 citation statements), published between 2019 and 2023.
References 25 publications (33 reference statements).
“…The methods most closely related to this work adapt SDS models to MDS data. Zhang et al. (2018a) adapt a hierarchical encoding framework trained on SDS data to MDS data by adding an additional document-level encoding. Baumel et al. (2018) incorporate query relevance into standard sequence-to-sequence models.…”
Section: Related Work
confidence: 99%
“…We take the MDS datasets from the DUC and TAC competitions, which are widely used in prior studies (Kulesza and Taskar, 2012; Lebanoff et al., 2018). Following convention (Cao et al., 2017; Lebanoff et al., 2018; Zhang et al., 2018; Cho et al., 2019), we measure ROUGE-1/2/SU4 F1 scores (Lin, 2004). The evaluation parameters are set according to Hong et al. (2014), with stemming applied and stopwords not removed.…”
Section: Methods
confidence: 99%
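The ROUGE scores mentioned above are computed with the official ROUGE toolkit (Lin, 2004); as a rough illustration of what the simplest variant measures, here is a minimal, self-contained sketch of ROUGE-1 F1 (unigram overlap between a candidate and a reference summary). The example summary strings are hypothetical; ROUGE-2 and ROUGE-SU4, which use bigrams and skip-bigrams, are not covered here.

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall.

    A simplified sketch: whitespace tokenization, lowercasing,
    no stemming (the official toolkit applies Porter stemming).
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped unigram overlap: each token counted at most as often
    # as it appears in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical candidate/reference pair
cand = "the model summarizes multiple documents"
ref = "the model summarizes several input documents"
print(round(rouge_1_f1(cand, ref), 3))  # 4 shared unigrams -> 8/11 ~ 0.727
```

F1 rather than recall alone is reported because abstractive systems can vary summary length; precision penalizes padding the output with extra words.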
“…Classical MDS methods explore both extractive (Erkan and Radev, 2004; Haghighi and Vanderwende, 2009) and abstractive approaches (Barzilay et al., 1999; Ganesan et al., 2010). Many neural MDS methods (Yasunaga et al., 2017; Zhang et al., 2018) are merely comparable to, or even worse than, classical methods due to the challenges of a large search space and limited training data. Unlike DPP-Caps-Comb (Cho et al., 2019), which incorporates neural measures into classical MDS as features, RL-MMR opts for the opposite by endowing SDS methods with the capability to conduct MDS, enabling the potential of further improvement with advances in SDS.…”
Section: Related Work
confidence: 99%