2022
DOI: 10.7717/peerj-cs.1005
Research on sentiment classification for netizens based on the BERT-BiLSTM-TextCNN model

Abstract: Sentiment analysis of netizens’ comments can accurately grasp the psychology of netizens and reduce the risks brought by online public opinion. However, there is currently no effective method to solve the problems of short text, open word range, and sometimes reversed word order in comments. To better solve the above problems, this article proposes a hybrid model of sentiment classification, which is based on bidirectional encoder representations from transformers (BERT), bidirectional long short-term memory (…

Cited by 26 publications (13 citation statements)
References 15 publications
“…In this work, three PLMs have been tested, including ProteinBERT (34), ProtTrans (37) and ESM2 (Table 1). Additionally, three classification architectures have been employed, namely MLP, LA (45) and biLSTM_TextCNN (46). A total of 13 models were developed and trained (Methods).…”
Section: Results
“…The second is the LA architecture, as described in a previous study (45), which shows excellent performance in protein localization classification. Thirdly, the biLSTM_TextCNN architecture, known for its effectiveness in sentiment classification (46), utilizes a bidirectional LSTM (biLSTM) to convert the information from the encoder into corresponding matrices. Following the classification process, a Text-attentional CNN (TextCNN) is applied, featuring three parallel one-dimensional convolution (Conv-1D) layers for crucial feature extraction.…”
Section: Methods
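The TextCNN step described in the statement above (three parallel one-dimensional convolutions over the BiLSTM outputs, each followed by max-over-time pooling) can be sketched in plain Python. This is a minimal illustrative stand-in under assumed toy dimensions, not the cited paper's implementation; the function names and example values are hypothetical.

```python
# Sketch of TextCNN-style feature extraction: three parallel Conv-1D
# filters of different widths slide over a sequence of feature vectors
# (stand-ins for BiLSTM hidden states); each filter's responses are
# ReLU-activated and max-pooled over time, and the pooled scalars are
# concatenated into one feature vector.

from typing import List


def conv1d_max_pool(seq: List[List[float]],
                    kernel: List[List[float]]) -> float:
    """Slide one Conv-1D filter over the sequence, ReLU, then max-pool."""
    k = len(kernel)  # filter width (timesteps covered per window)
    outputs = []
    for start in range(len(seq) - k + 1):
        window = seq[start:start + k]
        # Dot product between the window and the filter weights.
        act = sum(w * x
                  for row_w, row_x in zip(kernel, window)
                  for w, x in zip(row_w, row_x))
        outputs.append(max(act, 0.0))  # ReLU
    return max(outputs)  # max-over-time pooling


def textcnn_features(seq: List[List[float]],
                     filters: List[List[List[float]]]) -> List[float]:
    """Concatenate the pooled output of each parallel filter."""
    return [conv1d_max_pool(seq, f) for f in filters]


# Toy example: a length-4 sequence of 2-dim vectors and three parallel
# filters of widths 1, 2, and 3 (mirroring the "three parallel Conv-1D
# layers" in the cited description; all numbers here are made up).
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
filters = [
    [[1.0, 1.0]],                          # width-1 filter
    [[1.0, 0.0], [0.0, 1.0]],              # width-2 filter
    [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]],  # width-3 filter
]
print(textcnn_features(seq, filters))  # → [2.0, 2.0, 2.0]
```

Because each filter width captures a different n-gram span, the concatenated vector mixes short- and longer-range local features before the final classification layer.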
“…In order to validate the proposed model, the baselines are as follows. BERT-BiLSTM-CNN [ 27 ]: It combines the advantages of BERT embedding, BiLSTM, and TextCNN to capture local correlations and retain context information. BERT-BiGRU-CNN: This baseline replaces BiLSTM in BERT-BiLSTM-CNN with BiGRU, which has a simpler structure and lower computational cost than BiLSTM.…”
Section: Experimental Results and Analysis
“…In sentiment classification, the BiLSTM, BiGRU, and CNN models are integrated and proposed for sentiment classification [ 5 ]. A hybrid model of sentiment classification is proposed, which is based on BERT, BiLSTM, and a text convolutional neural network [ 27 ]. In the legal area, a shallow network with one BiLSTM layer and one attention layer is used to perform Portuguese legal text classification [ 28 ].…”
Section: Related Work