Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1446
Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization

Abstract: Generating a text abstract from a set of documents remains a challenging task. The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web. In contrast, parallel data for multi-document summarization are scarce and costly to obtain. There is a pressing need to adapt an encoder-decoder model trained on single-document summarization data to work with multipl…

Cited by 146 publications (132 citation statements)
References 47 publications
“…We use DUC 2004, as results on this dataset are reported in Lebanoff et al (2018), although this dataset is not the focus of this work. For results on DUC 2004, models were trained on the CNNDM dataset, as in Lebanoff et al (2018). PG-BRNN and CopyTransformer models, which were pretrained by OpenNMT on CNNDM, were applied to DUC without additional training, analogous to PG-Original.…”
Section: Analysis and Discussion (mentioning)
confidence: 99%
“…Attempts to realize abstractive MDS under this framework have been made, e.g., generating English Wikipedia through multi-document summarization [49]. Several works apply pre-trained single-document abstractive summarization models to the multi-document summarization task [50][51][52] to overcome the scarcity of training examples for MDS. Unsupervised neural abstractive MDS instead leverages large non-annotated corpora [53].…”
Section: Related Work (mentioning)
confidence: 99%
“…Table 8 lists the various approaches developed by the different event teams and their corresponding ROUGE scores. In addition to the techniques mentioned in Section 4, we noticed that some teams applied other methods during implementation, including a rule-based classifier (RBF) or a multilayer perceptron (MLP) classifier for relevance judgment, and the pointer-generator with maximal marginal relevance (PG-MMR) developed by Lebanoff, Song, & Liu (2018) for summarization.…”
Section: Metrics (mentioning)
confidence: 99%
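The PG-MMR system mentioned above couples a pointer-generator network with maximal marginal relevance (MMR) re-ranking. The MMR selection step itself can be sketched in a few lines: at each round, pick the sentence that balances relevance to the query against redundancy with already-selected sentences. This is a minimal illustrative sketch only; the function names and the word-overlap (Jaccard) similarity are assumptions for the example, not the similarity measure or implementation used in the paper.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two strings (illustrative choice)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mmr_select(sentences, query, k=2, lam=0.5):
    """Greedy MMR: score(s) = lam * sim(s, query) - (1 - lam) * max redundancy."""
    selected, candidates = [], list(sentences)
    while candidates and len(selected) < k:
        def score(s):
            relevance = jaccard(s, query)
            # Redundancy is the highest similarity to any already-selected sentence.
            redundancy = max((jaccard(s, t) for t in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low `lam`, the second pick favors a diverse sentence over a near-duplicate of the first, which is the behavior MMR is designed to encourage.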