2019
DOI: 10.1109/access.2019.2952888

Aspect Based Sentiment Analysis With Feature Enhanced Attention CNN-BiLSTM

Abstract: Previous work has recognized the importance of using the attention mechanism to capture the interaction between aspect words and their context for sentiment analysis. However, in most attention mechanisms it is unrigorous to use the average vector of the aspect words to compute the context attention. Moreover, the feature extraction ability of the model is also essential for effective analysis; combining a CNN with an LSTM can enhance both the feature extraction ability and the semantic expression ability of the mo…
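To make the critique concrete, here is a minimal sketch of the simplification the abstract questions: collapsing the aspect words to their average vector and using that single vector to score the context words. All names and dimensions below are illustrative, not taken from the paper.

```python
import numpy as np

def aspect_attention(context, aspect):
    """Weight context words against an averaged aspect representation.

    context: (n_words, d) context word vectors.
    aspect:  (n_aspect_words, d) aspect word vectors.
    The aspect is collapsed to its mean vector -- the coarse
    simplification the abstract argues against.
    """
    aspect_avg = aspect.mean(axis=0)          # (d,)
    scores = context @ aspect_avg             # dot-product relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over context words
    return weights @ context                  # attended context vector, (d,)

# toy usage: 5 context words, 2 aspect words, 4-dim embeddings
rng = np.random.default_rng(0)
ctx, asp = rng.normal(size=(5, 4)), rng.normal(size=(2, 4))
print(aspect_attention(ctx, asp).shape)       # (4,)
```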

Cited by 65 publications (33 citation statements)
References 31 publications
“…Based on the Keras deep learning framework, a BiLSTM-Attention model is constructed for defect text classification. Drawing on the hyperparameter settings used in [36,37], the parameters of the model are optimized with the grid search method [38][39][40]. The model's optimal parameter settings are shown in Table 6.…”
Section: Case Study and Analysis (mentioning)
confidence: 99%
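The grid search described in that statement can be sketched as an exhaustive loop over a hyperparameter grid. The search space and the train_and_score stub below are placeholders; the cited work's actual grid, metric, and Keras training loop are not given in this excerpt.

```python
from itertools import product

# hypothetical search space -- not the cited paper's actual grid
grid = {
    "lstm_units": [64, 128, 256],
    "dropout": [0.2, 0.5],
    "learning_rate": [1e-3, 1e-4],
}

def train_and_score(params):
    """Stand-in for building, fitting, and evaluating the
    BiLSTM-Attention model in Keras with the given parameters."""
    return -abs(params["dropout"] - 0.3)      # dummy validation objective

best_score, best_params = float("-inf"), None
keys = list(grid)
for values in product(*(grid[k] for k in keys)):
    params = dict(zip(keys, values))
    score = train_and_score(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)                            # best setting found by the search
```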
“…Meng et al. [9] adopted a CNN to extract high-level feature representations from an improved word embedding layer. A Bi-LSTM is then used to capture local and global semantic information, after which an attention layer is employed to highlight relevant aspect-term features.…”
Section: Related Work (mentioning)
confidence: 99%
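The pipeline attributed to Meng et al. (embedding, then CNN, then Bi-LSTM, then attention) can be sketched in Keras roughly as follows. All sizes are invented for illustration, and the attention here is a generic additive pooling, not necessarily the paper's exact formulation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# hypothetical sizes; the paper's actual dimensions are not given here
VOCAB, MAXLEN, EMB, FILTERS, UNITS, CLASSES = 10_000, 80, 300, 128, 64, 3

tokens = layers.Input(shape=(MAXLEN,), dtype="int32")
x = layers.Embedding(VOCAB, EMB)(tokens)               # word embedding layer
x = layers.Conv1D(FILTERS, 3, padding="same",
                  activation="relu")(x)                # CNN: high-level local features
x = layers.Bidirectional(
        layers.LSTM(UNITS, return_sequences=True))(x)  # BiLSTM: global semantics

# simple additive attention pooling over time steps
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

out = layers.Dense(CLASSES, activation="softmax")(context)
model = Model(tokens, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```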
“…Because of the efficient performance of BERT [10] in feature representation compared to conventional static embeddings such as GloVe [12] or word2vec [9], this research adopts and fine-tunes the BERT model for the AOM task. As the first step of the FGAOM model, the BERT language model is fine-tuned by pre-training it on three domain-specific corpora (see Table I).…”
Section: A. Domain Adaptation and Embedding Layer (mentioning)
confidence: 99%
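A hedged sketch of the two-stage recipe this statement describes, using the Hugging Face transformers API: continue masked-LM pre-training on domain corpora, then fine-tune the adapted encoder for classification. The checkpoint names, label count, and omitted training loops are assumptions, not details from the paper.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification)

# Step 1: continue pre-training BERT on domain corpora (masked-LM objective).
# The corpora and the training loop itself are stand-ins for the paper's setup.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
# ... train mlm_model on the three domain-specific corpora, then save:
mlm_model.save_pretrained("bert-domain-adapted")
tokenizer.save_pretrained("bert-domain-adapted")

# Step 2: fine-tune the domain-adapted encoder for the downstream task
# (num_labels=3 is an assumed negative/neutral/positive label set).
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-domain-adapted", num_labels=3)
```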