2022
DOI: 10.21203/rs.3.rs-2289743/v1
Preprint

Comparative Evaluation of Transformer-Based Nepali Language Models

Abstract: Large pre-trained transformer models using self-supervised learning have achieved state-of-the-art performance on various NLP tasks. However, for a low-resource language like Nepali, pre-training monolingual models remains a problem due to the lack of training data and of well-designed, balanced benchmark datasets. Furthermore, several multilingual pre-trained models such as mBERT and XLM-RoBERTa have been released, but their performance on the Nepali language remains unknown. We compared Nepali monolingual pre-tra…
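
For readers unfamiliar with how such comparisons are typically run, the sketch below shows, in broad strokes, how a multilingual pre-trained model like XLM-RoBERTa can be loaded and applied to a Nepali sentence via the Hugging Face transformers library. This is a minimal illustration, not the paper's actual experimental setup: the model name, the two-label classification head, and the example sentence are assumptions for demonstration only.

```python
# Minimal sketch (assumed setup, not the authors' exact configuration):
# load a multilingual pre-trained transformer and run a Nepali sentence
# through a sequence-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"  # mBERT alternative: "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2 is a placeholder; a real benchmark task would set it to the
# number of classes in that dataset.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Example Nepali sentence (Devanagari script), tokenized with the shared
# multilingual subword vocabulary.
inputs = tokenizer("नेपाली भाषा प्रशोधन रोचक छ।", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]) -- one score per assumed label
```

In practice, such a model would be fine-tuned on a labeled Nepali dataset before its predictions are compared against a monolingual Nepali model on the same benchmark.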
