2021
DOI: 10.1007/978-981-33-4597-3_37
A Novel BiGRU-BiLSTM Model for Multilevel Sentiment Analysis Using Deep Neural Network with BiGRU-BiLSTM

Cited by 8 publications (2 citation statements)
References 19 publications
“…The difference between the bidirectional methods [43] [45] and the unidirectional methods [46] [47] lies in the backward pass of the LSTM or GRU: information from the features is preserved, and by combining the two hidden states the model can draw on context from both directions at any point [45]. Furthermore, when BiGRU and BiLSTM are combined, more independent and long-range characteristics are captured [48]. Dilated CNNs perform better than a normal convolution layer [49] because of their larger receptive field (no loss of coverage), greater computational efficiency, and robustness (wider coverage at the same computational cost) [50], as well as lower memory consumption, since they skip the pooling step [9] and therefore do not degrade the resolution of the output image (dilation is used instead of pooling) [50].…”
Section: Introduction
confidence: 99%
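The receptive-field advantage of dilated convolutions claimed above can be checked with a small arithmetic sketch (not taken from the cited papers): for stacked 1-D convolutions with kernel size k, each layer with dilation d adds (k-1)·d positions to the receptive field, so exponentially increasing dilations widen coverage much faster than standard convolutions at the same depth and cost.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked 1-D convolutions (stride 1).

    Each layer with dilation d widens the field by (kernel_size - 1) * d.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three standard conv layers (dilation 1 each), kernel size 3
normal = receptive_field(3, [1, 1, 1])   # -> 7
# Three dilated conv layers (dilations 1, 2, 4), same kernel and depth
dilated = receptive_field(3, [1, 2, 4])  # -> 15
print(normal, dilated)
```

With the same number of layers and parameters, the dilated stack covers more than twice the context, which is the "wider coverage at the same computational cost" argument made in the quoted passage.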
“…In terms of accuracy and performance, on the other hand, deep learning methods outperform other comparable techniques. Deep learning has excelled in the classification of images [3][4], video [5][6], speech [7], and text [2][8][9][10], as well as in IoT-based analysis [11][12]. Several neural network models have already been proven superior to others.…”
Section: Introduction
confidence: 99%