Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1302
NCLS: Neural Cross-Lingual Summarization

Abstract: Cross-lingual summarization (CLS) is the task of producing a summary in one particular language for a source document in a different language. Existing methods simply divide this task into two steps: summarization and translation, leading to the problem of error propagation. To handle that, we present an end-to-end CLS framework, which we refer to as Neural Cross-Lingual Summarization (NCLS), for the first time. Moreover, we propose to further improve NCLS by incorporating two related tasks, monolingual summarization and machine translation, into the training process of CLS under multi-task learning.
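The abstract contrasts the two-step summarize-then-translate pipeline with an end-to-end model trained jointly on cross-lingual summarization (CLS), monolingual summarization (MS), and machine translation (MT). The sketch below illustrates that multi-task idea only: a shared encoder with one decoder per task and a weighted sum of per-task losses. All module names, dimensions, and loss weights are assumptions for illustration, not the NCLS authors' implementation.

```python
# Minimal multi-task training sketch in PyTorch (illustrative only; module
# names, sizes, and loss weights are assumptions, not the NCLS implementation).
import torch
import torch.nn as nn

class MultiTaskSummarizer(nn.Module):
    """Shared encoder; one decoder each for CLS, monolingual summarization, MT."""

    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)   # shared across tasks
        self.decoders = nn.ModuleDict({
            task: nn.TransformerDecoder(dec_layer, num_layers)        # task-specific decoders
            for task in ("cls", "ms", "mt")
        })
        self.project = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids, task):
        # Causal and padding masks are omitted for brevity.
        memory = self.encoder(self.embed(src_ids))
        hidden = self.decoders[task](self.embed(tgt_ids), memory)
        return self.project(hidden)  # per-token vocabulary logits

def multitask_loss(model, batches, weights=None, pad_id=0):
    """Weighted sum of per-task cross-entropy losses over one mixed step.

    `batches` maps task name -> (src_ids, tgt_in_ids, tgt_out_ids) tensors.
    """
    weights = weights or {"cls": 1.0, "ms": 0.5, "mt": 0.5}
    criterion = nn.CrossEntropyLoss(ignore_index=pad_id)
    total = 0.0
    for task, (src, tgt_in, tgt_out) in batches.items():
        logits = model(src, tgt_in, task)
        total = total + weights[task] * criterion(
            logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
    return total
```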

Cited by 87 publications (162 citation statements). References 19 publications.
“…Multimodal summarization has been proposed to extract the most important information from the multimedia information. The most significant difference between multimodal summarization (Mademlis et al. 2016; Li et al. 2017; 2018b; Zhu et al. 2018) and text summarization (Zhu et al. 2017; Paulus, Xiong, and Socher 2018; Celikyilmaz et al. 2018; Li et al. 2018c; Zhu et al. 2019) lies in whether the input data contains two or more modalities of data. One of the most significant advantages of the task is that it can use the rich information in multimedia data to improve the quality of the final summary.…”
Section: Related Work
Mentioning confidence: 99%
“…Then, translate ỹ^tgt back to the source language using a target-to-source MT model and discard the examples with high reconstruction errors, which are measured with ROUGE (Lin, 2004) scores. The details of this step can be found in Zhu et al. (2019).…”
Section: Problem Description
Mentioning confidence: 99%
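The excerpt describes a round-trip filtering step: translate the machine-generated target-language summary ỹ^tgt back to the source language and drop pairs whose reconstruction diverges too much from the original, as measured by ROUGE. A minimal sketch of that filtering loop is below; the `translate_tgt_to_src` callable, the simplified ROUGE-1 scorer, and the threshold value are all assumptions, not the exact recipe of Zhu et al. (2019).

```python
# Round-trip filtering sketch (pure Python). The back-translation model,
# the simplified ROUGE-1 scorer, and the cutoff are illustrative assumptions.
from collections import Counter

def rouge1_f1(reference: str, hypothesis: str) -> float:
    """Unigram-overlap ROUGE-1 F1 as a simple stand-in for ROUGE (Lin, 2004)."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    overlap = sum((ref & hyp).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def filter_round_trip(examples, translate_tgt_to_src, threshold=0.45):
    """Keep (source_summary, y_tgt) pairs whose back-translation stays faithful.

    `translate_tgt_to_src` is a hypothetical target-to-source MT callable;
    `threshold` is an illustrative cutoff, not a value from the paper.
    """
    kept = []
    for source_summary, y_tgt in examples:
        reconstruction = translate_tgt_to_src(y_tgt)
        if rouge1_f1(source_summary, reconstruction) >= threshold:
            kept.append((source_summary, y_tgt))
    return kept
```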
“…The pre-training loss we use is a weighted combination of three objectives. Similarly to Zhu et al. (2019), we use an XLS pre-training objective and an MT pre-training objective as described below with some simple but effective improvements. We also introduce an additional objective based on distilling knowledge from a monolingual summarization model.…”
Section: Supervised Pre-training Stage
Mentioning confidence: 99%
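The excerpt lists three pre-training objectives combined with weights: a cross-lingual summarization (XLS) loss, an MT loss, and a knowledge-distillation loss from a monolingual summarization teacher. A hedged sketch of such a weighted combination is below; the weight values, the temperature, and the use of a KL-based distillation term are assumptions about how the combination could look, not the cited paper's exact formulation.

```python
# Weighted pre-training objective sketch in PyTorch. Weights, temperature, and
# the KL-based distillation term are illustrative assumptions.
import torch.nn.functional as F

def pretraining_loss(xls_logits, xls_labels,
                     mt_logits, mt_labels,
                     student_logits, teacher_logits,
                     w_xls=1.0, w_mt=0.5, w_distill=0.5,
                     pad_id=0, temperature=2.0):
    # Cross-lingual summarization (XLS) and MT objectives: token-level cross-entropy.
    loss_xls = F.cross_entropy(xls_logits.reshape(-1, xls_logits.size(-1)),
                               xls_labels.reshape(-1), ignore_index=pad_id)
    loss_mt = F.cross_entropy(mt_logits.reshape(-1, mt_logits.size(-1)),
                              mt_labels.reshape(-1), ignore_index=pad_id)
    # Distillation: match the student to a monolingual-summarization teacher's
    # softened token distributions via KL divergence.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    loss_distill = F.kl_div(student_logprobs, teacher_probs,
                            reduction="batchmean") * temperature ** 2
    return w_xls * loss_xls + w_mt * loss_mt + w_distill * loss_distill
```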