2021
DOI: 10.22266/ijies2021.0228.21

Online Tuning of Hyperparameters in Deep LSTM for Time Series Applications

Abstract: Deep learning is one of the most remarkable artificial intelligence trends. It stands behind numerous recent achievements in several domains, such as speech processing and computer vision, to mention a few. Accordingly, these achievements have sparked great attention to employing deep learning in time series modelling and forecasting. It is known that deep learning algorithms built on neural networks contain multiple hidden layers, which makes the computation of deep neural networks challenging and, sometim…
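The abstract is truncated above, but its central idea, adjusting hyperparameters dynamically while training proceeds rather than fixing them once up front, can be illustrated with a minimal sketch. Everything named here (the train_epoch stub, the perturbation scheme, the retune interval) is an illustrative assumption, not the paper's actual algorithm:

```python
# Minimal sketch of online (dynamic) hyperparameter tuning, as opposed to
# static tuning fixed before training. Hypothetical stand-ins throughout.
import random

def train_epoch(hparams):
    """Stub: stands in for training one epoch of a deep LSTM with the given
    hyperparameters and returning a validation loss (hypothetical)."""
    # Toy surrogate loss: pretend lr=0.01 and units=64 are the optimum.
    return (hparams["lr"] - 0.01) ** 2 + ((hparams["units"] - 64) / 64.0) ** 2

def online_tune(epochs=20, retune_every=5):
    """Interleave training with periodic hyperparameter re-tuning."""
    hparams = {"lr": 0.1, "units": 32}  # initial guess
    best_loss = train_epoch(hparams)
    for epoch in range(1, epochs + 1):
        if epoch % retune_every == 0:
            # Propose a perturbed candidate and keep it only if it helps.
            candidate = {
                "lr": hparams["lr"] * random.choice([0.5, 1.0, 2.0]),
                "units": max(8, hparams["units"] + random.choice([-16, 0, 16])),
            }
            if train_epoch(candidate) < best_loss:
                hparams = candidate
        best_loss = train_epoch(hparams)
    return hparams, best_loss

if __name__ == "__main__":
    print(online_tune())
```

A real implementation would replace the surrogate loss with an actual deep LSTM training epoch and a held-out validation measurement.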

Cited by 14 publications (12 citation statements) · References 30 publications
“…Deep learning has worked effectively in many areas, including computer vision, hyperspectral image processing, medical image analysis [17], and natural language processing, including the online tuning of hyperparameters [18]. Compared to conventional feature-based methods such as support vector regressors and the multi-layer perceptron, deep learning has some advantages, such as working directly on two-dimensional data, less susceptibility to local optima, and the ability to learn texture features from data [19]. Other advantages of DCNNs are transferability and sparse connections.…”
Section: Related Work (mentioning)
confidence: 99%
“…One particular deep learning model is the convolutional neural network, also called the deep convolutional neural network (DCNN). Since 2012, DCNNs [17][18][19][20] have led to a series of breakthroughs in image classification [22]. Deep learning-based computer-aided diagnosis for breast cancer [23] and lung cancer [24] has been applied in radiology.…”
Section: Related Work (mentioning)
confidence: 99%
“…The results on classification benchmark datasets revealed that the proposed approach can effectively find optimal values for the CNN model with high classification accuracy. In [10], the authors used a genetic algorithm to tune the parameters of a deep long short-term memory (LSTM) network. The experimental results showed that a dynamic tuning approach performs better than static tuning of the LSTM.…”
Section: Introduction (mentioning)
confidence: 99%
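The genetic-algorithm tuning that [10] applies to deep LSTM hyperparameters is not reproduced in this report, so the following is only a generic sketch of that family of methods. The search space, the stub fitness function, and the truncation-selection scheme are all assumptions for illustration; a real fitness evaluation would train the LSTM and return a validation score:

```python
# Hedged sketch of genetic-algorithm hyperparameter tuning: evolve
# (layers, units, learning rate) against a fitness score. All names
# and the fitness stub are illustrative, not the method of [10].
import random

SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "lr": [1e-4, 1e-3, 1e-2, 1e-1],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    """Stub fitness, higher is better; replace with LSTM validation score."""
    return (-abs(ind["layers"] - 2)
            - abs(ind["units"] - 128) / 128.0
            - abs(ind["lr"] - 1e-3))

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    # Resample each gene from the search space with probability `rate`.
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(pop_size=10, generations=15):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```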
“…Artificial neural networks, as a part of artificial intelligence methods, have been widely used in many fields for prediction purposes (Bakhashwain & Sagheer, 2021; Rahman et al., 2021; Zhao & Liu, 2021), including wind speed prediction. A crucial factor in designing a neural network is its structure or topology, namely determining the number of hidden layers and the number of hidden neurons in each hidden layer, because these choices are closely tied to the resulting performance (Aggarwal, 2018; Koutsoukas et al., 2017; Nitta, 2017); even so, topology determination remains a complex and difficult problem (Lee et al., 2018; Naitzat, Zhitnikov & Lim, 2020; Rahman et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…A topology that does not match the needs of the problem causes overfitting or underfitting in neural networks. Several researchers have investigated ways to determine the neural network topology: methods based solely on the number of input and output attributes (Sartori & Antsaklis, 1991; Tamura & Tateishi, 1997), trial and error (Blanchard & Samanta, 2020; Madhiarasan, 2020; Madhiarasan & Deepa, 2016; Madhiarasan & Deepa, 2017; Şen & Özcan, 2021), and rules of thumb (Bakhashwain & Sagheer, 2021; Carballal et al., 2021; Rahman et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%
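The "rule of thumb" approach mentioned in the last excerpt usually reduces to simple formulas over the input and output dimensions alone. The three heuristics below are common forms that circulate in the literature; the cited papers may use different variants, so treat these as assumed starting points rather than prescriptions:

```python
# Hedged sketch of common rules of thumb for picking the number of hidden
# neurons from input/output sizes alone. The exact formulas vary by source.
import math

def hidden_neuron_heuristics(n_in: int, n_out: int) -> dict:
    return {
        # Geometric mean of input and output sizes.
        "sqrt(n_in * n_out)": round(math.sqrt(n_in * n_out)),
        # Two-thirds of the input size plus the output size.
        "2/3 * n_in + n_out": round(2 * n_in / 3 + n_out),
        # Midpoint between input and output sizes.
        "(n_in + n_out) / 2": round((n_in + n_out) / 2),
    }

if __name__ == "__main__":
    # Example: 10 input features, 1 output (one-step time series forecast).
    for rule, n in hidden_neuron_heuristics(10, 1).items():
        print(f"{rule:>22}: {n} hidden neurons")
```

Because such heuristics ignore the data itself, they are best used to seed a search (trial and error, or a genetic algorithm as above) rather than to fix the topology outright.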