2020
DOI: 10.48550/arxiv.2011.04784
Preprint

EstBERT: A Pretrained Language-Specific BERT for Estonian

Cited by 1 publication (1 citation statement)
References: 0 publications
“…Furthermore, we also considered bidirectional encoder representations from transformers (BERT), which have recently performed well in textual classification tasks (Devlin et al., 2019). Namely, we used the version pre-trained in Estonian (EstBERT) (Tanvir et al., 2021). During this phase we opted for a model with one layer and an AdamW optimizer with an initial learning rate of 2e-5, as suggested in Devlin and colleagues (2019).…”
Section: Step S2: Comparing Different SML Models and Feature Extracti…
Citation type: mentioning (confidence: 99%)
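The setup the citing work describes (EstBERT with a single classification layer, fine-tuned with AdamW at an initial learning rate of 2e-5) corresponds to a standard sequence-classification fine-tuning recipe. The sketch below is an illustration of that recipe, not code from the cited paper; the Hugging Face model ID "tartuNLP/EstBERT", the two-label task, and the placeholder sentences are assumptions.

# Minimal sketch, assuming EstBERT is available as "tartuNLP/EstBERT" on the
# Hugging Face hub; texts, labels, and num_labels are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "tartuNLP/EstBERT"  # assumed hub ID for EstBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
# AutoModelForSequenceClassification puts one linear classification layer
# on top of the pretrained encoder, matching the "one layer" setup described.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# AdamW with the 2e-5 initial learning rate suggested in Devlin et al. (2019).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["Tere, see on näidislause.", "See on teine näidislause."]  # placeholder Estonian sentences
labels = torch.tensor([0, 1])  # placeholder class labels

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
outputs = model(**batch, labels=labels)  # forward pass returns loss and logits
outputs.loss.backward()                  # one gradient step on the toy batch
optimizer.step()
optimizer.zero_grad()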