2021
DOI: 10.14569/ijacsa.2021.0121153
Transformer based Contextual Model for Sentiment Analysis of Customer Reviews: A Fine-tuned BERT

Abstract: The Bidirectional Encoder Representations from Transformers (BERT) is a state-of-the-art language model used for multiple natural language processing tasks and sequential modeling applications. Accurately predicting sentiment through context-based analysis of customer review data from various social media platforms is a challenging and time-consuming task due to the high volumes of unstructured data. In recent years, more research has been conducted based on the recurrent neural network algorithm, Long Sh…
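The abstract describes fine-tuning BERT for sentiment prediction. In a typical fine-tuned setup, a linear classification head with a softmax is placed on top of the pooled sentence representation (e.g. BERT's [CLS] vector). A minimal sketch of that final step, with purely illustrative weights and a hypothetical `classify` helper (the full model and its learned parameters are not shown in the source):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(pooled, weights, bias):
    """Linear classification head: softmax(W . h + b).

    pooled  : sentence embedding (e.g. BERT's [CLS] vector), length d
    weights : one weight vector per class, shape (num_classes, d)
    bias    : one bias per class, length num_classes
    """
    logits = [sum(w_i * h_i for w_i, h_i in zip(w, pooled)) + b
              for w, b in zip(weights, bias)]
    return softmax(logits)

# Toy two-class (positive / negative) example; the vectors are made up.
probs = classify(pooled=[0.5, -1.0, 2.0],
                 weights=[[1.0, 0.0, 0.5], [-1.0, 0.0, -0.5]],
                 bias=[0.0, 0.0])
label = "positive" if probs[0] > probs[1] else "negative"
```

In an actual fine-tuning run, `pooled` would come from the pretrained encoder and `weights`/`bias` would be learned jointly with the encoder on labeled reviews.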


Cited by 15 publications (12 citation statements). References 4 publications (5 reference statements).
“…BERT pioneered bidirectional training, considering both left and right contexts in all layers [52], [53]. In refining this approach, RoBERTa removed the next sentence prediction objective and integrated dynamic masking during training [54], [55].…”
Section: E. BERT, RoBERTa and ALBERT
Mentioning confidence: 99%
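The statement above contrasts BERT's static masking with RoBERTa's dynamic masking, where the masked positions are re-sampled on every pass over the data rather than fixed once at preprocessing time. A minimal pure-Python sketch of that idea (the 15% rate follows BERT's convention; whitespace word-splitting stands in for a real subword tokenizer):

```python
import random

MASK, MASK_PROB = "[MASK]", 0.15

def mask_tokens(tokens, rng):
    """Dynamic masking: re-sample which tokens are hidden on every call,
    so each epoch sees a different masked view of the same sentence
    (RoBERTa-style), instead of one mask fixed at preprocessing time
    (original BERT)."""
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < MASK_PROB:
            masked.append(MASK)
            labels.append(tok)      # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)     # not a prediction target
    return masked, labels

tokens = "the product arrived late but works well".split()
rng = random.Random(1)
epoch1, _ = mask_tokens(tokens, rng)  # with this seed: masks position 0
epoch2, _ = mask_tokens(tokens, rng)  # masks different positions
```

The simplification here omits BERT's 80/10/10 split (mask / random token / keep), but it shows why dynamic masking exposes the model to more masking patterns over training.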
“…The highest accuracy of 70% was achieved by XLNet. Durairaj and Chinnalagu [14] suggested a fine-tuned BERT model to predict customer sentiment by using customer reviews from various datasets. The proposed model's performance was compared with SVM, FastText, BiLSTM and hybrid FastText-BiLSTM models.…”
Section: Literature Review
Mentioning confidence: 99%
“…In order to test the accuracy of our models (BioBERT and BioGPT), we decided to extend these models by adding a softmax function to determine the relevancy of the searched PubMed articles to the case description. The softmax function reports the kind of sentiment of the searched PubMed article compared to the given case description [43]. This approach can be used in a teaching and learning clinical setting, but in practice, for seeking evidence to prove an option. Figure 5 illustrates our Softmax Sentiment Model.…”
Section: Enhancing the Bootstrapping of QandA
Mentioning confidence: 99%
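The citing work above scores how relevant each retrieved article is to a case description by passing per-article scores through a softmax. A hedged sketch of that pattern, assuming cosine similarity over hypothetical embedding vectors (the actual BioBERT/BioGPT embeddings and scoring details are not given in the statement):

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings: one case description, three candidate articles.
case = [0.9, 0.1, 0.3]
articles = {
    "article_a": [0.8, 0.2, 0.4],   # close to the case description
    "article_b": [0.1, 0.9, 0.0],   # dissimilar
    "article_c": [0.5, 0.5, 0.5],
}
scores = {name: cosine(case, vec) for name, vec in articles.items()}
relevance = dict(zip(scores, softmax(list(scores.values()))))
best = max(relevance, key=relevance.get)  # most relevant article
```

Because softmax normalizes the scores into a distribution, the relevancies are directly comparable across candidates, which is presumably what makes the ranking usable in the QandA bootstrapping described above.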