Interspeech 2022
DOI: 10.21437/interspeech.2022-10177

BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese

Cited by 19 publications (13 citation statements)
References 5 publications

“…However, the authors of PhoBERT, a Vietnamese pre-trained language model used instead of BERT for better Vietnamese representation extraction (Nguyen and Nguyen 2020), have demonstrated that models dedicated to a fixed language yield superior results compared to a single multilingual model. They have introduced the BARTpho model (Tran, Le, and Nguyen 2022) along with pre-trained weights trained on Vietnamese data.…”
Section: Related Work
confidence: 99%
“…BARTpho (Tran, Le, and Nguyen 2022) is a pre-trained sequence-to-sequence model for Vietnamese based on the BART architecture (Lewis et al. 2020) and trained on a large corpus of Vietnamese text. BART is a sequence-to-sequence model that generates text from text, supporting tasks such as summarization, translation, and open-ended text generation.…”
Section: Bahnaric-fine-tuned Bn-BARTpho
confidence: 99%
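To make the excerpt above concrete, the following is a minimal sketch of loading a pre-trained BARTpho checkpoint with the Hugging Face transformers library and running a forward pass. It assumes the publicly released syllable-level checkpoint vinai/bartpho-syllable on the Hugging Face Hub (a word-level variant, vinai/bartpho-word, also exists); downstream tasks such as summarization would require task-specific fine-tuning on top of this.

```python
# Minimal sketch: load the pre-trained BARTpho seq2seq model and run one
# forward pass. Assumes the public "vinai/bartpho-syllable" checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
model = AutoModelForSeq2SeqLM.from_pretrained("vinai/bartpho-syllable")

line = "Chúng tôi là những nghiên cứu viên."  # "We are researchers."
inputs = tokenizer(line, return_tensors="pt")

with torch.no_grad():
    # With no decoder inputs supplied, the (m)BART-style forward pass derives
    # decoder_input_ids by shifting input_ids right, so this runs as-is.
    outputs = model(**inputs)

print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```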
“…BARTPhoBEiT [44]. BARTPhoBEiT is our previously introduced integration of the BARTPho [47] and BEiT-3 [42] models, specifically tailored for the Vietnamese language. This model incorporates pre-trained sequence-to-sequence and bidirectional encoder representations derived from Image Transformers.…”
Section: Transformer Approach
confidence: 99%
“…This strongly limits the portability to multilingual scenarios. To overcome this limitation, prior works have trained transformer-based models in languages other than English, such as French [23], Vietnamese [24], or Chinese [25]. Limited research efforts have been devoted to the Italian language, i.e., [9, 26-28].…”
Section: Italian Language Modeling
confidence: 99%