2022
DOI: 10.48550/arxiv.2203.03820
Preprint

A Variational Hierarchical Model for Neural Cross-Lingual Summarization

Abstract: The goal of cross-lingual summarization (CLS) is to convert a document in one language (e.g., English) into a summary in another language (e.g., Chinese). Essentially, the CLS task is a combination of machine translation (MT) and monolingual summarization (MS), and thus there exists a hierarchical relationship between MT&MS and CLS. Existing studies on CLS mainly focus on utilizing pipeline methods or jointly training an end-to-end model through an auxiliary MT or MS objective. However, it is very challenging …
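
As a rough illustration of the joint-training setup the abstract mentions, the hedged sketch below combines a CLS loss with weighted auxiliary MT and MS losses. The function name and the weights `w_mt`/`w_ms` are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: joint training of CLS with auxiliary MT and MS
# objectives. Names and default weights are assumptions for illustration.
import torch

def joint_loss(loss_cls: torch.Tensor,
               loss_mt: torch.Tensor,
               loss_ms: torch.Tensor,
               w_mt: float = 0.5,
               w_ms: float = 0.5) -> torch.Tensor:
    """Combine the main CLS loss with weighted auxiliary MT and MS losses."""
    return loss_cls + w_mt * loss_mt + w_ms * loss_ms
```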

Cited by 2 publications (2 citation statements)
References 31 publications

“…This approach enhanced the interaction between different languages, implicitly considering cross-lingual alignment, semantic similarity, and patterns between summaries in different languages, which facilitates knowledge transfer from high-resource to low-resource languages. Liang [21] employed a conditional variational autoencoder [22] with a shared encoder and decoder for multi-task learning of the machine translation (MT), MS, and CLS tasks. The authors constructed two local-level latent variables for translation and summarization, respectively, and a global-level latent variable for CLS.…”
Section: Cross-lingual Summarization
confidence: 99%
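
To make the latent-variable layout described above concrete, here is a minimal PyTorch sketch under stated assumptions: a shared encoder state feeds two local-level posteriors (translation and summarization) and a global-level posterior conditioned on both local latents. All module names, dimensions, and the concatenation-based conditioning are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of hierarchical latents: two local-level latent variables
# (translation, summarization) and a global-level latent (CLS) conditioned on
# them. Sizes and the conditioning scheme are illustrative assumptions.
import torch
import torch.nn as nn

class HierarchicalLatents(nn.Module):
    def __init__(self, hidden_dim: int = 512, latent_dim: int = 64):
        super().__init__()
        # Posterior networks for the two local-level latents.
        self.mt_posterior = nn.Linear(hidden_dim, 2 * latent_dim)   # translation
        self.ms_posterior = nn.Linear(hidden_dim, 2 * latent_dim)   # summarization
        # Global-level posterior conditioned on both local latents.
        self.cls_posterior = nn.Linear(hidden_dim + 2 * latent_dim, 2 * latent_dim)

    @staticmethod
    def reparameterize(stats: torch.Tensor) -> torch.Tensor:
        """Standard reparameterization trick: z = mu + sigma * eps."""
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, h: torch.Tensor):
        # h: pooled shared-encoder state, shape (batch, hidden_dim)
        z_mt = self.reparameterize(self.mt_posterior(h))
        z_ms = self.reparameterize(self.ms_posterior(h))
        # The global CLS latent sees the encoder state and both local latents.
        z_cls = self.reparameterize(
            self.cls_posterior(torch.cat([h, z_mt, z_ms], dim=-1)))
        return z_mt, z_ms, z_cls
```

Conditioning the global CLS latent on the two local latents mirrors the hierarchical relationship the paper posits between MT&MS and CLS.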
“…Therefore, enhancing model performance in low-resource NER tasks has recently become a research hotspot. Some researchers have adopted cross-lingual transfer methods [3], primarily transferring knowledge from resource-rich to low-resource languages. For instance, Xie et al. [4] proposed a translation method based on bilingual word vectors to improve the mapping of cross-lingual text, replacing a Bidirectional Long Short-Term Memory (BiLSTM) [5] encoder with a self-attention [6] mechanism to improve robustness to word-order differences across languages.…”
Section: Introduction
confidence: 99%
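
As a rough sketch of the encoder swap described in this statement, the following PyTorch snippet tags tokens with a Transformer self-attention encoder in place of a BiLSTM. The class name, dimensions, and the plain linear tagger are illustrative assumptions, not the configuration of Xie et al. [4].

```python
# Sketch: a self-attention (Transformer) encoder used in place of a BiLSTM
# over token embeddings, followed by a per-token tag classifier for NER.
# All hyperparameters and names are assumptions for illustration.
import torch
import torch.nn as nn

class SelfAttentionNER(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int,
                 d_model: int = 256, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.tagger = nn.Linear(d_model, num_tags)  # per-token tag logits

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> logits: (batch, seq_len, num_tags)
        h = self.encoder(self.embed(token_ids))
        return self.tagger(h)
```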