2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA)
DOI: 10.1109/iisa.2019.8900675

Hyperparameter Optimization of LSTM Network Models through Genetic Algorithm

Cited by 43 publications (15 citation statements)
References 4 publications
“…HPO can be seen as the final step in model design and the first step in training the neural network. Considering the effect of hyperparameters on accuracy and speed during training, the training process should be designed carefully before it starts [23]. The HPO process automatically optimizes the hyperparameters of the machine learning model to get humans out of the loop of the machine learning system.…”
Section: Hyperparameter Optimization (mentioning)
confidence: 99%
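To make the "humans out of the loop" idea concrete, below is a minimal sketch of an automated HPO driver using random search over a discrete LSTM hyperparameter space. The search space, the train_and_evaluate stub, and all values are illustrative assumptions, not the cited paper's actual setup.

```python
import random

# Hypothetical search space; the paper's actual ranges may differ.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3],
    "hidden_units": [32, 64, 128, 256],
    "window_size": [10, 20, 50],
    "batch_size": [16, 32, 64],
}

def sample_config(rng):
    """Draw one hyperparameter configuration uniformly at random."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def train_and_evaluate(config):
    """Stub standing in for 'train an LSTM, return validation error'.
    Replace with a real training run; here we fake a deterministic
    score so the sketch is runnable on its own."""
    rng = random.Random(str(sorted(config.items())))
    return rng.uniform(0.0, 1.0)

def random_search(n_trials=20, seed=0):
    """Sample n_trials configurations and keep the lowest-error one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("inf")
    for _ in range(n_trials):
        config = sample_config(rng)
        score = train_and_evaluate(config)  # lower is better
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print(f"best config: {config}, validation error: {score:.3f}")
```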
“…LSTMs have several parameters, such as the number of layers, the number of units in the hidden layer, time window size, batch size, etc., referred to as hyperparameters [18], which influence network behaviour [4] and thus should be optimised before the training process [18].…”
Section: Long Short-Term Memory (LSTM) Network (mentioning)
confidence: 99%
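As an illustration of how the hyperparameters named in that excerpt shape a concrete model, the sketch below builds a stacked LSTM in TensorFlow/Keras. The framework choice and every default value are assumptions; the excerpt does not specify an implementation.

```python
import tensorflow as tf

def build_lstm(num_layers, hidden_units, window_size, n_features=1):
    """Assemble a stacked LSTM whose shape is fixed by the
    hyperparameters the excerpt lists. All defaults here are
    illustrative, not values from the cited paper."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(window_size, n_features)))
    for i in range(num_layers):
        # Intermediate layers must emit full sequences so the next
        # LSTM layer receives a 3-D tensor; the last layer emits
        # only its final hidden state.
        model.add(tf.keras.layers.LSTM(
            hidden_units, return_sequences=(i < num_layers - 1)))
    model.add(tf.keras.layers.Dense(1))  # one-step-ahead forecast
    model.compile(optimizer="adam", loss="mse")
    return model

# batch_size, the remaining hyperparameter, enters at fit time:
# model.fit(x_train, y_train, batch_size=32, epochs=50)
```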
“…where n is the number of samples, yᵢ is the desired output and ŷᵢ is the predicted output value of the i-th observation made by the model. The MAE (16), MAPE (17) and RMSE (18) were evaluated in accordance with the following equations [10]:…”
Section: F. Performance Metrics (mentioning)
confidence: 99%
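The excerpt truncates before the equations themselves, so formulas (16)–(18) are not recoverable from this page; the snippet below computes the textbook definitions of MAE, MAPE and RMSE, which those numbered equations presumably match.

```python
import math

def regression_metrics(y_true, y_pred):
    """Textbook MAE, MAPE and RMSE over paired observations.
    MAPE assumes y_true contains no zeros."""
    n = len(y_true)
    mae = sum(abs(y - yh) for y, yh in zip(y_true, y_pred)) / n
    mape = 100.0 / n * sum(abs((y - yh) / y) for y, yh in zip(y_true, y_pred))
    rmse = math.sqrt(sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / n)
    return mae, mape, rmse

print(regression_metrics([1.0, 2.0, 4.0], [1.1, 1.9, 3.5]))
```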
“…One possibility is to introduce optimization techniques like grid search (GS), random search (RS), genetic algorithm (GA), simulated annealing (SA), etc. to accelerate the training process [13][14][15][16]. The above-mentioned methods can improve the modelling efficiency to some extent; however, the results obtained may fluctuate due to different kinds of initialization methods.…”
Section: Introduction (mentioning)
confidence: 99%
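Since the cited paper's title names a genetic algorithm for exactly this purpose, a toy GA over the same kind of discrete hyperparameter space is sketched below. The selection, crossover and mutation choices here are generic illustrations, not the paper's implementation.

```python
import random

GENES = {  # hypothetical discrete search space
    "num_layers": [1, 2, 3],
    "hidden_units": [32, 64, 128, 256],
    "window_size": [10, 20, 50],
    "batch_size": [16, 32, 64],
}

def fitness(ind):
    """Stand-in for 'validation error of an LSTM trained with these
    hyperparameters'; deterministic fake so the sketch runs alone."""
    rng = random.Random(str(sorted(ind.items())))
    return rng.uniform(0.0, 1.0)  # lower is better

def crossover(a, b, rng):
    """Uniform crossover: each gene comes from either parent."""
    return {k: rng.choice([a[k], b[k]]) for k in GENES}

def mutate(ind, rng, rate=0.1):
    """Resample each gene with probability `rate`."""
    return {k: (rng.choice(GENES[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(pop_size=10, generations=15, seed=0):
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in GENES.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]   # truncation selection, elitist
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return min(pop, key=fitness)

print(genetic_search())
```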