2022
DOI: 10.1186/s40537-022-00656-6
Arabic aspect sentiment polarity classification using BERT

Abstract: Aspect-based sentiment analysis (ABSA) is a textual analysis methodology that defines the polarity of opinions on certain aspects related to specific targets. The majority of research on ABSA is in English, with a small amount of work available in Arabic. Most previous Arabic research has relied on deep learning models that depend primarily on context-independent word embeddings (e.g. word2vec), where each word has a fixed representation independent of its context. This article explores the modeling capabiliti…

Cited by 30 publications (12 citation statements)
References 48 publications
“…Recently, increased attention has been paid to the use of large pre-trained language models, such as BERT and its variants, as they achieve superior results on a variety of NLP tasks. [39] proposed BERT with a simple linear classification layer to accomplish T2 only. Experiments on three Arabic datasets, hotel reviews, book reviews, and Arabic news, showed that the proposed model's accuracies were 89.51%, 73.23%, and 85.73%, respectively.…”
Section: Deep Learning Approaches
Mentioning confidence: 99%
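The "BERT with a simple linear classification layer" cited above can be sketched as follows. This is a minimal illustration, not the authors' exact configuration: the model size, the tiny randomly initialised config (used so the sketch runs without downloading pretrained Arabic BERT weights), the placeholder token ids, and the three-way polarity label set are all assumptions.

```python
# Sketch: BERT encoder + one linear layer over the pooled [CLS] output,
# classifying a (review sentence, aspect) pair into a sentiment polarity.
import torch
from torch import nn
from transformers import BertConfig, BertModel

class BertAspectPolarityClassifier(nn.Module):
    """BERT with a single linear classification head (hypothetical sketch)."""
    def __init__(self, config: BertConfig, num_polarities: int = 3):
        super().__init__()
        # Randomly initialised here; in practice one would load pretrained
        # Arabic BERT weights instead of building from a bare config.
        self.bert = BertModel(config)
        self.classifier = nn.Linear(config.hidden_size, num_polarities)

    def forward(self, input_ids, attention_mask, token_type_ids):
        outputs = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids)
        # pooler_output is the transformed [CLS] representation.
        return self.classifier(outputs.pooler_output)  # (batch, num_polarities)

# Tiny config so the example runs offline.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)
model = BertAspectPolarityClassifier(config)

# Sentence-pair input: segment 0 = review sentence, segment 1 = aspect term,
# i.e. "[CLS] sentence [SEP] aspect [SEP]". Real usage would tokenise with
# the matching Arabic BERT tokenizer; these ids are placeholders.
input_ids = torch.randint(0, 100, (1, 16))
attention_mask = torch.ones(1, 16, dtype=torch.long)
token_type_ids = torch.cat([torch.zeros(1, 12, dtype=torch.long),
                            torch.ones(1, 4, dtype=torch.long)], dim=1)

logits = model(input_ids, attention_mask, token_type_ids)
print(tuple(logits.shape))  # (1, 3): one logit per polarity class
```

Training would then minimise cross-entropy between these logits and the gold polarity labels; only the single linear layer is new on top of the encoder.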
“…To validate the effectiveness of the multi-task model, we compared the best multi-task model (AR-LCF-ATEPC-Fusion) with state-of-the-art deep learning-based and transformer-based approaches that used the same benchmark dataset: RNN-BiLSTM-CRF [69], BiGRU [70], an attention mechanism with a neural network [71], BERT [72], Bert-Flair-BiLSTM/BiGRU-CRF [75], a sequence-to-sequence model for preprocessing with BERT for classification (Seq-seq BERT) [76], and BERT with a linear layer (BERT-linear-pair) [77]. The results demonstrated that the LCF-ATEPC model outperformed the other comparable models.…”
Section: Performance Of Proposed Multi-task Model On the Original Dat…
Mentioning confidence: 99%
“…Generally, autoregressive models perform better on text-generation tasks, whereas autoencoder models perform better on language-comprehension tasks. Abdelgwad et al. [46] proposed an aspect-level sentiment analysis method based on BERT for the Arabic sentiment-polarity classification task and achieved good results. Choudrie et al. [47] developed a multi-class sentiment classifier system based on RoBERTa and transfer learning, applied to sentiment analysis of COVID-19.…”
Section: Related Work
Mentioning confidence: 99%