2021
DOI: 10.1007/978-3-030-88113-9_50

Arabic Sentiment Analysis Using BERT Model

Cited by 27 publications (10 citation statements)
References 19 publications
“…We chose one of the pre-trained models based on the dataset type to provide better-contextualized weights to initialize the model. Our model outperforms the state-of-the-art models AraBERT [13] and Choukhi et al. [14]. While both models use a BERT architecture like the proposed approach, the main difference is that Choukhi et al. [14] use the BERT-medium architecture, containing eight encoder layers, without a text cleaning step.…”
Section: Results (mentioning)
confidence: 85%
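As an illustration of the architecture described in this statement, the sketch below builds a BERT-medium-style classifier with eight encoder layers using the Hugging Face transformers library. The hidden size, attention-head count, and label count are assumptions based on the commonly published BERT-medium settings, not details taken from the cited paper.

```python
# Minimal sketch (assumed settings, not the cited paper's exact configuration):
# a BERT "medium" encoder with eight layers for binary sentiment classification.
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(
    num_hidden_layers=8,    # eight encoder layers ("medium" BERT)
    hidden_size=512,        # assumed BERT-medium hidden size
    num_attention_heads=8,  # assumed BERT-medium head count
    intermediate_size=2048,
    num_labels=2,           # e.g. positive / negative sentiment
)
model = BertForSequenceClassification(config)  # randomly initialised, not pre-trained
print(model.config.num_hidden_layers)          # -> 8
```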
“…Several recent studies [12], [13] have trained the bidirectional encoder representations from transformers (BERT) model on the Wikipedia and OSCAR datasets for the Arabic language. In addition, several recent studies [14], [15] have fine-tuned the Arabic BERT model [13] for the downstream sentiment analysis (SA) task. The drawbacks identified from the analysis of the existing literature are: i) models are not tested on different datasets; ii) some models ignore the contextual meaning of the sentence; iii) models that use context, such as BERT, are fine-tuned from general pre-trained models, which affects performance; and iv) there is room for improvement in the reported prediction accuracy.…”
Section: Introduction (mentioning)
confidence: 99%
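The statement above refers to fine-tuning a pre-trained Arabic BERT model for the downstream sentiment analysis task. A minimal sketch of such a fine-tuning loop with the Hugging Face Trainer API is shown below; the checkpoint name and the two toy training examples are illustrative assumptions, not the setup used in the cited studies.

```python
# Hedged sketch: fine-tuning an assumed Arabic BERT checkpoint for sentiment analysis.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "asafaya/bert-base-arabic"  # assumed checkpoint, not necessarily the one in [13]
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Two toy labelled examples (1 = positive, 0 = negative) so the sketch runs end to end.
train = Dataset.from_dict({
    "text": ["الخدمة ممتازة", "تجربة سيئة جدا"],
    "label": [1, 0],
}).map(lambda batch: tokenizer(batch["text"], truncation=True,
                               padding="max_length", max_length=64), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="arabic-sa", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
)
trainer.train()  # updates all BERT weights plus the classification head
```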
“…This can be fine-tuned to capture context for various NLP tasks such as question answering, sentiment analysis, text classification, sentence embedding, and interpreting ambiguity in text [25, 33, 90, 148]. Earlier language models examine text in only one direction, which suits sentence generation by predicting the next word, whereas the BERT model examines text in both directions simultaneously for better language understanding.…”
Section: Datasets in NLP and State-of-the-Art Models (mentioning)
confidence: 99%
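To make the bidirectionality point concrete, the short sketch below uses BERT's masked-token objective: the model predicts a masked word from context on both its left and its right, which a left-to-right language model cannot do. The English checkpoint is chosen only because it is the standard example, not because it appears in the cited work.

```python
# Sketch: BERT fills a masked token using context from both directions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
# Both the left context ("went to the") and the right context ("to buy bread")
# constrain the prediction for the masked position.
for pred in fill("She went to the [MASK] to buy bread."):
    print(pred["token_str"], round(pred["score"], 3))
```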
“…In the sentiment analysis domain, the authors in [46] proposed an approach using the transformer-based BERT model, which integrates an Arabic BERT tokenizer instead of a basic BERT tokenizer. They tested the technique on five public datasets; the best accuracy was 0.96.…”
Section: B. Arabic Text Classification (mentioning)
confidence: 99%
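The difference between an Arabic BERT tokenizer and a basic BERT tokenizer noted in this statement can be seen directly by tokenizing the same Arabic sentence with both. The checkpoint names below are illustrative assumptions, not the tokenizers used in [46].

```python
# Hedged comparison: an Arabic WordPiece vocabulary keeps Arabic words largely intact,
# while an English BERT vocabulary fragments them or maps them to unknown tokens.
from transformers import AutoTokenizer

arabic_tok = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")  # assumed checkpoint
basic_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

sentence = "الفيلم كان رائعا جدا"  # "the movie was very wonderful"
print(arabic_tok.tokenize(sentence))
print(basic_tok.tokenize(sentence))
```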