2021
DOI: 10.48550/arxiv.2107.13290
Preprint

Arabic aspect based sentiment classification using BERT

Abstract: Aspect-based sentiment analysis (ABSA) is a textual analysis methodology that determines the polarity of opinions on certain aspects related to specific targets. The majority of research on ABSA is in English, with only a small amount of work available in Arabic. Most previous Arabic research has relied on deep learning models that depend primarily on context-independent word embeddings (e.g. word2vec), where each word has a fixed representation independent of its context. This article explores the modeling capabilities…

Cited by 5 publications (10 citation statements)
References 0 publications
“…The BERT model was used by many researchers to classify the sentiments. Abdelgwad (2021) used the BERT model for sentiment classification on the hotel-review dataset, fitted the model on 10 epochs, and achieved an 89.50% accuracy by employing 10% dropout layers and 24 batch sizes. Singh, Jakhar & Pandey (2021) used the BERT model for emotion classification on Twitter data and attained 93.80% accuracy.…”
Section: Results
confidence: 99%
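The statement above reports a concrete fine-tuning recipe (10 epochs, 10% dropout, batch size 24). As a minimal sketch, assuming a typical BERT fine-tuning setup (the checkpoint name and dataset size below are hypothetical, not from the cited paper):

```python
# Hypothetical sketch of the fine-tuning setup quoted above.
# "bert-base-arabic" and the dataset size are assumptions for illustration.
TRAIN_CONFIG = {
    "base_model": "bert-base-arabic",  # assumed Arabic BERT checkpoint
    "epochs": 10,        # "fitted the model on 10 epochs"
    "dropout": 0.10,     # "10% dropout layers"
    "batch_size": 24,    # "24 batch sizes"
}

def batches_per_epoch(n_examples: int, batch_size: int) -> int:
    """Optimizer steps per epoch, assuming the last partial batch is kept."""
    return -(-n_examples // batch_size)  # ceiling division

# e.g. a hypothetical 6,000-review training split:
steps = batches_per_epoch(6000, TRAIN_CONFIG["batch_size"])
print(steps)  # 250 steps per epoch, for 10 * 250 = 2,500 steps total
```

The ceiling division mirrors a data loader with `drop_last=False`; with `drop_last=True` it would be plain floor division.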
“… Abdelgwad (2021) used bidirectional encoder representations from transformers (BERT) model for sentiment classification of Arabic text. They used three datasets: Arabic news, hotel reviews, and human-annotated book reviews for the experiments.…”
Section: Related Work
confidence: 99%
“…To validate the effectiveness of the multi-task model, we compared the best multi-task model (AR-LCF-ATEPC-Fusion) with state-of-the-art deep-learning-based and transformer-based approaches that used the same benchmark dataset: RNN-BiLSTM-CRF [69], BiGRU [70], attention mechanism with neural network [71], BERT [72], Bert-Flair-BiLSTM/BiGRU-CRF [75], Sequence-to-Sequence model for preprocessing with BERT for classification (Seq-seq BERT) [76], and BERT with a linear layer (Bert-linerpair) [77]. The results demonstrated that the LCF-ATEPC model outperformed other comparable models.…”
Section: Performance Of Proposed Multi-task Model On the Original Dat…
confidence: 99%
“…Position information, added by position embeddings, is not included in those token embeddings. Finally, it is not apparent whether each token belongs to sentence A or sentence B; this can be indicated by adding a learned segment embedding [32]. First, we pack the input features N0 = {T1, …, Te}, where Te …”
Section: Word Embedding Using Modified BERT
confidence: 99%
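The statement above describes BERT's input packing: each input feature in N0 = {T1, …, Te} is the element-wise sum of a token embedding, a segment (sentence A/B) embedding, and a position embedding. A minimal NumPy sketch of that sum (the sizes and random tables below are illustrative assumptions, not the cited model's weights):

```python
import numpy as np

# Illustrative sketch: BERT's input representation is the element-wise sum of
# token, segment (sentence A/B), and position embeddings. Sizes are toy values.
rng = np.random.default_rng(0)
vocab_size, max_len, hidden = 100, 16, 8

token_emb    = rng.normal(size=(vocab_size, hidden))  # one vector per wordpiece id
segment_emb  = rng.normal(size=(2, hidden))           # sentence A -> 0, sentence B -> 1
position_emb = rng.normal(size=(max_len, hidden))     # one vector per position

def bert_input(token_ids, segment_ids):
    """Pack input features N0 = {T1, ..., Te}: sum the three embeddings per token."""
    positions = np.arange(len(token_ids))
    return token_emb[token_ids] + segment_emb[segment_ids] + position_emb[positions]

N0 = bert_input([5, 9, 2], [0, 0, 1])  # three tokens, the last one in sentence B
print(N0.shape)  # (3, 8): one hidden-size vector per input token
```

Because the three tables are simply added, the model can recover position and sentence membership that the token embeddings alone do not carry, which is exactly the gap the quoted passage points out.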