2022
DOI: 10.15439/2022f53
Deep Learning Transformer Architecture for Named Entity Recognition on Low Resourced Languages: State of the art results

Cited by 5 publications
(5 citation statements)
References 11 publications
“…These models were then evaluated against other neural-network and machine-learning approaches. The F1-scores of the transformer-architecture models consistently surpassed those of the neural-network and machine-learning techniques [6], [7].…”
Section: Related Work
confidence: 99%
“…Another large set of texts we used was gathered from multiple OPUS corpora [8]. They contain book reviews, subtitles, TED talk transcriptions, etc.…”
Section: Datasets
confidence: 99%
“…All the experiments are done with PyTorch, using the Transformers library, and the models were taken from Hugging Face. The transformer models are trained using the original BERT parameters: a dropout probability of 0.1 for the attention heads and hidden layers, a hidden size of 768, an initializer range of 0.02, maximum position embeddings of 512 and an intermediate size of 3,072.…”
Section: Parameters Settings
confidence: 99%
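
The hyperparameters quoted above correspond to the standard BERT-base settings. A minimal sketch, assuming the Hugging Face transformers library named in the excerpt; the token-classification label count is an illustrative assumption, not a value taken from the cited paper.

    from transformers import BertConfig, BertForTokenClassification

    # Configuration matching the settings quoted above (standard BERT-base).
    config = BertConfig(
        hidden_size=768,                    # hidden layer size
        intermediate_size=3072,             # feed-forward (intermediate) size
        max_position_embeddings=512,        # maximum sequence length
        hidden_dropout_prob=0.1,            # dropout for hidden layers
        attention_probs_dropout_prob=0.1,   # dropout for attention heads
        initializer_range=0.02,             # std of the weight initializer
        num_labels=9,                       # assumed NER tag-set size (illustrative)
    )

    # A token-classification (NER-style) model built from this configuration.
    model = BertForTokenClassification(config)
    print(model.config)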
“…According to different works, such as [1], solutions to address these issues may include additional features that provide lexical, syntactic and semantic information about the text, which have proven useful for detecting event triggers. Transformer models have been adopted for event detection because of their strong performance on different Natural Language Processing (NLP) tasks [4], [5]. BERT [6], which stands for Bidirectional Encoder Representations from Transformers, is pretrained to generate bidirectional representations of words, capturing semantics by considering both the left and right context of the text.…”
Section: Introduction
confidence: 99%
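
The bidirectional token representations described in the excerpt can be sketched as follows, assuming the Hugging Face transformers library; bert-base-cased is an illustrative checkpoint choice, not one named in the cited paper.

    import torch
    from transformers import AutoTokenizer, AutoModel

    # Load an illustrative pretrained BERT encoder and its tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModel.from_pretrained("bert-base-cased")

    # Encode a sample sentence; every token representation is conditioned
    # on both its left and right context.
    inputs = tokenizer("The earthquake struck the coastal town.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One 768-dimensional contextual vector per token (plus special tokens).
    print(outputs.last_hidden_state.shape)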