2021
DOI: 10.1007/s12559-021-09948-0
A Convolutional Stacked Bidirectional LSTM with a Multiplicative Attention Mechanism for Aspect Category and Sentiment Detection

Cited by 35 publications (6 citation statements)
References 56 publications
“…As explored in detail in the findings, the proposed model with its ensemble teacher-student framework has considerably outperformed the baseline SVM-based models (Al-Smadi et al., 2016; Pontiki et al., 2016) by 27.9% in micro F1. Compared to the RNN, LSTM, and CNN implementations reported in the related literature (Tamchyna & Veselovská, 2016; Bensoltane & Zaki, 2021; Kumar, Trueman & Cambria, 2021), our model demonstrated a significant improvement ranging from 16% to 20% in the micro F1 metric. The proposed model also performed better than the highest-performing DL model reported in the literature (BERT embedding with BiGRU) by 2.7% in the ACD task.…”
Section: Discussion
confidence: 78%
“…The BERT embeddings achieved the highest F1 of 63.6 and 65.5 when added to the GRU and BiGRU models, respectively. Kumar, Trueman & Cambria (2021) proposed a convolutional stacked bidirectional LSTM with a multiplicative attention mechanism for aspect category detection and sentiment analysis. When evaluated, the CNN and stacked BiLSTM Multi Attn model achieved the same weighted F1 (0.52) on the SemEval 2015 dataset; however, it achieved a higher weighted F1 of 0.60 on the SemEval 2016 dataset than the CNN stacked BiLSTM Attn, which scored 0.58.…”
Section: Review of the Related Literature
confidence: 99%
“…Then, models with lower AIC values are preferred due to their improved balance between fit and complexity. We compare the HDRB model in terms of recall, precision, and F1-score with the results of Trueman et al. [32]. The authors proposed a hybrid model, a convolutional stacked BiLSTM with a multiplicative/single attention mechanism. The convolutional layers capture local patterns and features from the input data, while the bidirectional LSTM allows the model to consider past and future contexts.…”
Section: HDRB Model Results
confidence: 99%
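The architecture summarized in the statement above (convolution for local patterns, stacked BiLSTM for bidirectional context, multiplicative attention for weighting time steps) can be sketched roughly as follows. This is a minimal illustrative PyTorch implementation, not the authors' exact configuration: all layer sizes, the single learned attention query, and the class count are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class ConvStackedBiLSTMAttn(nn.Module):
    """Illustrative sketch: Conv1d -> stacked BiLSTM -> multiplicative
    (Luong-style) attention -> linear classifier. Hyperparameters are
    placeholder assumptions, not the published model's settings."""

    def __init__(self, vocab_size=1000, embed_dim=64, conv_channels=64,
                 hidden_dim=64, num_layers=2, num_classes=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Convolution captures local n-gram patterns over the embeddings.
        self.conv = nn.Conv1d(embed_dim, conv_channels,
                              kernel_size=3, padding=1)
        # Stacked bidirectional LSTM models past and future context.
        self.bilstm = nn.LSTM(conv_channels, hidden_dim,
                              num_layers=num_layers,
                              bidirectional=True, batch_first=True)
        # Multiplicative attention: score_t = (W h_t) . q
        self.attn_W = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)
        self.query = nn.Parameter(torch.randn(2 * hidden_dim))
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)        # (batch, embed, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq, conv)
        h, _ = self.bilstm(x)                         # (batch, seq, 2*hidden)
        scores = self.attn_W(h) @ self.query          # (batch, seq)
        weights = torch.softmax(scores, dim=1)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # (batch, 2*hidden)
        return self.fc(context)                       # (batch, num_classes)

model = ConvStackedBiLSTMAttn()
logits = model(torch.randint(0, 1000, (4, 20)))
print(logits.shape)  # torch.Size([4, 12])
```

Softmax over the attention scores pools the BiLSTM outputs into a single context vector before classification, which is the mechanism the cited description attributes to the model.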
“…This model achieved the best performance results on SemEval-2017 [13]. The study by [17] developed a convolutional stacked BiLSTM with a multiplicative attention mechanism to detect aspect category and sentiment polarity. The model was evaluated as a multiclass classification task.…”
Section: English Sentiment Analysis
confidence: 99%