2019
DOI: 10.2991/ijcis.d.190710.001

Attention Pooling-Based Bidirectional Gated Recurrent Units Model for Sentimental Classification

Abstract: Recurrent neural networks (RNNs) are among the most popular architectures for handling variable-length text sequences; they show outstanding results in many natural language processing (NLP) tasks and remarkable performance in capturing long-term dependencies. Many models built on RNNs have achieved excellent results. However, most of these models overlook the locations of the keywords in a sentence and the semantic connections in different directions. As a consequence, these methods do not make full use of the avai…
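Below is a minimal sketch, assuming PyTorch, of the kind of bidirectional GRU encoder the abstract refers to: the text is read in both directions, so the representation at every position carries both left and right context. Class and parameter names (BiGRUEncoder, vocab_size, hidden_dim) are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # bidirectional=True reads the text left-to-right and right-to-left,
        # so each position sees context from both directions.
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)   # (batch, seq_len, emb_dim)
        outputs, _ = self.gru(embedded)        # (batch, seq_len, 2 * hidden_dim)
        return outputs

# Toy usage: a batch of two padded sentences of length 7.
encoder = BiGRUEncoder()
batch = torch.randint(1, 10000, (2, 7))
print(encoder(batch).shape)  # torch.Size([2, 7, 128])
```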

Cited by 10 publications (6 citation statements); References 31 publications
“…The self-attention mechanism performs parallel computation on text, thus reducing storage space and speeding up operation. Zhang et al. [9] propose the BGRU-Att-pooling model, which applies an element-wise multiplication attention mechanism to analyze the semantic features of the text and uses 2D max pooling to retain the most important features. Reducing redundant features in this way also lowers the complexity of the model.…”
Section: Related Work
confidence: 99%
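As a rough illustration of the two ingredients named in this statement, the sketch below (PyTorch assumed) gates every feature of every hidden state with an element-wise attention weight and then applies 2D max pooling over the time and feature axes. It is an interpretation of the description above, not the authors' exact BGRU-Att-pooling implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPooling(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Produces one weight per feature of every hidden state.
        self.att = nn.Linear(dim, dim)

    def forward(self, H):                      # H: (batch, seq_len, dim) BiGRU outputs
        # Element-wise multiplication attention: gate each feature of each
        # hidden state instead of assigning a single scalar per time step.
        weights = torch.sigmoid(self.att(H))   # (batch, seq_len, dim)
        weighted = H * weights                 # element-wise product
        # 2D max pooling over the (seq_len, dim) plane keeps the strongest
        # responses and drops redundant features.
        pooled = F.max_pool2d(weighted.unsqueeze(1), kernel_size=(2, 2))
        return pooled.squeeze(1).flatten(1)    # (batch, seq_len/2 * dim/2)

H = torch.randn(4, 20, 128)                    # e.g. outputs of a BiGRU
print(AttentionPooling()(H).shape)             # torch.Size([4, 640])
```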
“…GRU changes the way the hidden layer of an RNN is computed: it reduces the three gates of the LSTM to a reset gate and an update gate and combines them with a candidate hidden state to control the flow of text information at each time step [9], which makes it easier for the model to extract information from earlier text. GRU is computed according to the following formulae.…”
Section: BiGRU
confidence: 99%
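The excerpt is cut off before the citing paper's formulae. For reference, one common convention for the GRU update is:

```latex
% Standard GRU equations (one common convention); not copied from the citing paper.
\begin{aligned}
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right) && \text{(update gate)} \\
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h \left(r_t \odot h_{t-1}\right) + b_h\right) && \text{(candidate state)} \\
h_t &= \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(hidden state)}
\end{aligned}
```

Here σ is the logistic sigmoid, ⊙ is element-wise multiplication, and x_t and h_{t-1} are the current input and previous hidden state; some formulations swap the roles of z_t and 1 - z_t in the last line.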
“…The process can be formalized as: behave = TransformerEncoder(ACT) (6). Attention Layer: the attention mechanism [33][34][35] lets the model capture the whole traffic dynamics in the input sequence. Inspired by [36,37], we use the attention mechanism as an attention pooling mechanism, which attends to the key element in the chain of criminal behavior elements and retains the most meaningful information about the facts of the case. The formula is defined as follows:…”
Section: Behavior Chain Encoder
confidence: 99%
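This excerpt is also cut off before the formula. A typical attention-pooling formulation over encoder states h_t (an assumption here, in the spirit of standard attentive pooling, not the citing paper's exact definition) is:

```latex
% A typical attention-pooling formulation (assumption; the citing paper's own
% formula is not shown in the excerpt above).
\begin{aligned}
u_t &= \tanh\!\left(W h_t + b\right) \\
\alpha_t &= \frac{\exp\!\left(u_t^{\top} u_w\right)}{\sum_{k}\exp\!\left(u_k^{\top} u_w\right)} \\
s &= \sum_{t} \alpha_t\, h_t
\end{aligned}
```

Here u_w is a learned context vector, α_t are the normalized attention weights, and s is the pooled representation of the behavior chain.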
“…In the past few years, deep learning methods based on convolutional neural networks (CNNs) have achieved significant results in machine vision [1,2], shape representation [3][4][5], speech recognition [6,7], natural language processing [8][9][10], etc. In particular, many advanced deep convolutional networks have been proposed to handle visual tasks.…”
Section: Introduction
confidence: 99%