2020
DOI: 10.1109/access.2020.2982427

Banner: A Cost-Sensitive Contextualized Model for Bangla Named Entity Recognition

Abstract: Named Entity Recognition (NER) is a task in Natural Language Processing (NLP) that aims to classify words into a predetermined list of Named Entities (NE). Many architectures have produced good results on high-resource languages like English and Chinese. However, the NER task has not yet achieved much progress for Bangla, a low-resource language. In this paper, we perform the NER task on the Bangla language using Word2Vec and contextual Bidirectional Encoder Representations from Transformers (BERT) embeddings. We…
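The general pattern the abstract describes, contextual BERT embeddings feeding a token-level classifier, can be illustrated with a minimal sketch. This is not the authors' released code: it assumes the HuggingFace `transformers` library and the public `bert-base-multilingual-cased` checkpoint, and the BIO tag set below is hypothetical; the paper's actual label inventory, cost-sensitive loss, and training setup are not reproduced here.

```python
# Minimal sketch of BERT-based token classification for Bangla NER.
# NOT the paper's implementation: the classification head below is
# untrained, so predictions are meaningless until the model is
# fine-tuned on a labeled Bangla NER corpus.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical BIO tag set; the paper's actual entity list may differ.
LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)
model.eval()

sentence = "ঢাকা বাংলাদেশের রাজধানী"  # "Dhaka is the capital of Bangladesh"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, seq_len, num_labels)

tags = logits.argmax(dim=-1).squeeze(0)  # one predicted tag per subword
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag in zip(tokens, tags):
    print(f"{token}\t{LABELS[tag]}")
```

The "cost-sensitive" element of the paper's title would enter at training time, for example as per-class weights in the cross-entropy loss to counter the dominance of the O tag; that detail is omitted from this sketch.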

Cited by 20 publications (14 citation statements). References 41 publications.
“…Among recent studies, Karim et al. used a Densely Connected Network (DCN) in combination with Bidirectional Long Short-Term Memory (BiLSTM). Ashrafi et al. [45] implemented a BERT-based deep neural architecture that uses the contextual embeddings from BERT [30] as input for multi-label classification. Table 1 presents a comprehensive overview of recent works on Bangla NER.…”
Section: Related Work
confidence: 99%
“…Many works have studied different Bangla NLU tasks in isolation, e.g., sentiment classification (Das and Bandyopadhyay, 2010; Sharfuddin et al., 2018; Tripto and Ali, 2018), semantic textual similarity (Shajalal and Aono, 2018), parts-of-speech (PoS) tagging (Alam et al., 2016), and named entity recognition (NER) (Ashrafi et al., 2020). However, Bangla NLU has not yet had a comprehensive, unified study.…”
Section: BanglishBERT
confidence: 99%
“…Joshi et al. (2020b) categorized Bangla in the language group that lacks efforts in labeled data collection and relies on self-supervised pretraining (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019) to boost natural language understanding (NLU) task performance. To date, the Bangla language has continued to rely on fine-tuning multilingual pretrained language models (PLMs) (Ashrafi et al., 2020; Das et al., 2021; Islam et al., 2021). However, since multilingual PLMs cover a wide range of languages (Conneau and Lample, 2019), they are large (hundreds of millions of parameters) and require substantial computational resources for fine-tuning.…”
Section: Introduction
confidence: 99%
“…Furthermore, the entity type coverage was relatively small, and the effects of corpus size and of heterogeneous data on the experimental results were not examined in the experimental section. At present, entity recognition based on pretrained models and attention mechanisms is the mainstream in the field of generic entity recognition [15-20], which gives important insight into the direction of entity recognition technology development in the military field.…”
Section: Related Work
confidence: 99%