Abstract: Large pre-trained transformer models using self-supervised learning have achieved state-of-the-art performance on various NLP tasks. However, for a low-resource language like Nepali, pre-training monolingual models remains difficult due to the lack of training data and of well-designed, balanced benchmark datasets. Furthermore, several multilingual pre-trained models such as mBERT and XLM-RoBERTa have been released, but their performance on Nepali remains unknown. We compared Nepali monolingual pre-tra…
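The paper's evaluation code is not shown in this excerpt, but as a minimal sketch, the two multilingual checkpoints named in the abstract can be loaded through the Hugging Face transformers library under their public Hub names ("bert-base-multilingual-cased" and "xlm-roberta-base"); the Nepali test sentence below is illustrative only, not from the paper:

```python
# Minimal sketch (not the authors' code): load the multilingual checkpoints
# mentioned in the abstract and inspect how each tokenizer handles Nepali.
from transformers import AutoModelForMaskedLM, AutoTokenizer

for name in ["bert-base-multilingual-cased", "xlm-roberta-base"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForMaskedLM.from_pretrained(name)

    # Sanity check: heavy subword fragmentation of a Nepali phrase is one
    # symptom of weak Nepali coverage in a multilingual vocabulary.
    tokens = tokenizer.tokenize("नेपाली भाषा")  # "Nepali language"
    print(name, tokens)
```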