2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS)
DOI: 10.1109/mwscas.2017.8053244

Simplified gating in long short-term memory (LSTM) recurrent neural networks

Abstract: The standard LSTM recurrent neural network, while very powerful in long-range-dependency sequence applications, has a highly complex structure and a relatively large number of (adaptive) parameters. In this work, we present an empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias, and hidden unit signals from the individual gating signals. The experiments on two sequence datasets show that the thre…
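The gating reductions the abstract describes can be made concrete with a short sketch. Below is a minimal NumPy illustration of one LSTM step, assuming the usual cell equations; the variant labels (`no_input`, `bias_only`, `hidden_only`) are illustrative names for the kinds of reductions described, not the paper's own notation, and the parameter dictionary `p` is a hypothetical container for the weight matrices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p, variant="standard"):
    """One LSTM time step; `variant` selects how the three gates are formed.

    A sketch of the paper's idea (removing input/bias/hidden terms from the
    gating signals), not a verbatim reproduction of its equations.
    """
    gates = {}
    for g in ("i", "f", "o"):
        if variant == "standard":        # sigma(W x + U h + b)
            z = p["W" + g] @ x + p["U" + g] @ h_prev + p["b" + g]
        elif variant == "no_input":      # sigma(U h + b): input signal removed
            z = p["U" + g] @ h_prev + p["b" + g]
        elif variant == "bias_only":     # sigma(b): only the bias remains
            z = p["b" + g]
        else:                            # "hidden_only": sigma(U h), no bias
            z = p["U" + g] @ h_prev
        gates[g] = sigmoid(z)
    # The candidate cell input keeps its full form in every variant.
    c_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ h_prev + p["bc"])
    c = gates["f"] * c_prev + gates["i"] * c_tilde
    h = gates["o"] * np.tanh(c)
    return h, c
```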

Cited by 38 publications (18 citation statements). References 17 publications (36 reference statements).

“…It is the activity label value corresponding to the L-frame sequential infrared signal. By comparing the output value with the label value, the gradient descent [35] method is used to iteratively update the network parameters through the backpropagation [36] process. The collected sequential infrared signals are trained with this method.…”
Section: Model Training
confidence: 99%
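The training procedure that excerpt describes (compare the output with the label value, backpropagate, update the parameters by gradient descent) is the standard supervised loop. Here is a minimal PyTorch sketch; the tensor shapes and class count are hypothetical stand-ins for the cited paper's infrared sequences.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: batches of L-frame infrared feature sequences
# mapped to one of `num_classes` activity labels.
L, feat_dim, num_classes = 16, 8, 4

class ActivityClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):                    # x: (batch, L, feat_dim)
        _, (h_n, _) = self.lstm(x)           # final hidden state of the LSTM
        return self.head(h_n[-1])

model = ActivityClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # plain gradient descent
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, L, feat_dim)             # stand-in for infrared sequences
y = torch.randint(0, num_classes, (32,))     # stand-in activity labels

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)              # compare output with label value
    loss.backward()                          # backpropagation
    optimizer.step()                         # gradient-descent parameter update
```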
“…Here, we explore further the first three SLIM LSTM variants, denoted as LSTM1, LSTM2, and LSTM3 as termed in [6], [7], [10], [11].…”
Section: B. SLIM LSTM Variants Overview
confidence: 99%
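For reference, the gate reductions behind LSTM1, LSTM2, and LSTM3 can be sketched as follows, based on the SLIM LSTM papers cited in that excerpt; consult [6], [7] for the exact equations.

```latex
% Gate pre-activations for each gate g in {i, f, o}; the standard LSTM uses
%   g_t = \sigma(W_g x_t + U_g h_{t-1} + b_g).
\begin{aligned}
\text{LSTM1:}\quad g_t &= \sigma(U_g\, h_{t-1} + b_g) && \text{(input signal removed)}\\
\text{LSTM2:}\quad g_t &= \sigma(b_g)                 && \text{(bias only)}\\
\text{LSTM3:}\quad g_t &= \sigma(U_g\, h_{t-1})       && \text{(hidden signal only)}
\end{aligned}
```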
“…This second approach tackles hate speech detection through neural networks (Schmidt and Wiegand, 2017). In this stage, we used several neural network architectures that work well enough with text, among them: Convolutional Neural Networks (CNN) (Jacovi et al., 2018), Long Short-Term Memory (LSTM) and its variations: Peephole, Bidirectional (BiLSTM), and the Gated Recurrent Unit (GRU) (Lu and Salem, 2017). We tested these architectures with different corpora that were slightly modified; in some cases we used a corpus with stop words, or simply a stemmed or lemmatized version of the words.…”
Section: Based on NN
confidence: 99%
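As a concrete illustration of the architectures that excerpt lists, here is a minimal PyTorch sketch of a bidirectional LSTM text classifier; the vocabulary, embedding, hidden, and label sizes are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration; real values depend on the corpus.
vocab_size, emb_dim, hidden, num_labels = 10_000, 100, 64, 2

class BiLSTMClassifier(nn.Module):
    """Embed token ids, run a bidirectional LSTM, and classify from the
    concatenated final hidden states of the two directions."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, ids):                        # ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.emb(ids))
        h = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # forward + backward states
        return self.head(h)

model = BiLSTMClassifier()
logits = model(torch.randint(0, vocab_size, (8, 40)))  # toy batch of token ids
```

Swapping `nn.LSTM` for `nn.GRU` yields the GRU variant mentioned in the excerpt; note that `nn.GRU` returns a single hidden-state tensor rather than an `(h, c)` tuple, so the unpacking changes accordingly.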