2022
DOI: 10.1109/tnnls.2021.3086029
Achieving Online Regression Performance of LSTMs With Simple RNNs

Abstract: Recurrent neural networks (RNNs) are widely used for online regression due to their ability to generalize nonlinear temporal dependencies. As an RNN model, long short-term memory networks (LSTMs) are commonly preferred in practice, as these networks are capable of learning long-term dependencies while avoiding the vanishing gradient problem. However, due to their large number of parameters, training LSTMs requires considerably longer training time compared to simple RNNs (SRNNs). In this article, we achieve th…
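The abstract's point about training cost can be made concrete: for the same input and hidden sizes, an LSTM carries roughly four times the parameters of an SRNN, because it learns four gate transformations (input, forget, cell, output) where the SRNN learns a single recurrent map. A minimal sketch, assuming PyTorch and arbitrary illustrative sizes (not taken from the paper):

```python
# Compare trainable parameter counts of an SRNN and an LSTM with
# identical input/hidden sizes. The ~4x gap reflects the LSTM's four
# gate matrices versus the SRNN's single recurrent transformation.
import torch.nn as nn

input_size, hidden_size = 32, 128  # hypothetical sizes for illustration

srnn = nn.RNN(input_size, hidden_size)
lstm = nn.LSTM(input_size, hidden_size)

def num_params(model: nn.Module) -> int:
    """Total number of trainable parameters in the model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f"SRNN parameters: {num_params(srnn)}")  # 128*(32+128) + 2*128 = 20,736
print(f"LSTM parameters: {num_params(lstm)}")  # 4 * 20,736 = 82,944
```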

Cited by 6 publications (2 citation statements)
References 23 publications
“…Moreover, we assume that the last layer L of this neural network is a fully connected layer. This is often the case for neural networks that solve a regression problem, e.g., Kim et al. (2020), Vural, Ilhan, Yilmaz, Ergüt, and Kozat (2021), and Yan (2012). A schematic overview of this last layer is in Fig.…”
Section: Neural Network for a Regression Problem (mentioning; confidence: 98%)
“…We assume that the considered neural network applies a linear activation function to the output, i.e., the output y_i^L of the last layer is directly the estimated label ŷ_i of training sample i. This linear activation function is also often applied in neural networks that solve a regression problem, e.g., Kim et al. (2020), Vural et al. (2021), and Yan (2012). The vector with estimated labels for the…”
Section: Neural Network for a Regression Problem (mentioning; confidence: 99%)
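The two citing statements describe the same standard setup: a regression network whose last layer L is fully connected and whose output activation is linear (the identity), so the final layer's output is used directly as the estimated label ŷ_i. A minimal sketch, assuming PyTorch; the layer sizes and the hidden nonlinearity are hypothetical choices, not taken from the cited works:

```python
# Regression network with a fully connected last layer and a linear
# (identity) output activation, matching the architecture the citing
# statements describe.
import torch
import torch.nn as nn

class RegressionNet(nn.Module):
    def __init__(self, in_features: int = 16, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),  # hidden nonlinearity (illustrative choice)
        )
        # Last layer L: fully connected, with no activation applied
        # afterwards, i.e., a linear output head for regression.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The head's output is used directly as the estimate ŷ_i.
        return self.head(self.body(x))

net = RegressionNet()
y_hat = net(torch.randn(8, 16))  # batch of 8 samples -> 8 estimated labels
print(y_hat.shape)  # torch.Size([8, 1])
```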