2019 · DOI: 10.3233/jifs-190033
Optimizing deep neural networks hyperparameter positions and values

Cited by 14 publications (10 citation statements) · References 14 publications
“…This study varied hyperparameters/parameters such as the optimization algorithm, momentum, activation function, number of layers, dropout, learning rate, number of epochs, batch size, and others in a systematic yet efficient manner. We understand that different approaches are used in the literature for determining the optimal set of hyperparameters in a deep learning model. 23,50–53 Approaches for finding the best hyperparameters include grid search, 27 random search, 28 the manual method, genetic algorithms, and Bayesian methods, with new approaches that rely on algorithms and optimization methods continuing to evolve.…”
Section: Overview of the Proposed Approach
confidence: 99%
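The random-search strategy named in that statement is straightforward to sketch. Below is a minimal Python loop; the search space, trial count, and the `evaluate` callback are illustrative assumptions, not taken from the paper or any citing study.

```python
import random

# Hypothetical search space -- ranges are illustrative, not from the paper.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64, 128],
    "dropout_rate": [0.0, 0.2, 0.5],
    "num_layers": [1, 2, 3],
}

def random_search(evaluate, n_trials=20, seed=0):
    """Sample n_trials random configurations and keep the best one.

    `evaluate` is a user-supplied callable mapping a configuration
    dict to a validation score (higher is better).
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Grid search differs only in enumerating every combination instead of sampling; genetic and Bayesian methods replace the sampling step with a guided proposal mechanism.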
“…Akl et al. [37] studied the effect of altering a hyperparameter within the deep learning model architecture.…”
Section: Related Work
confidence: 99%
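The idea of moving a hyperparameter to different positions within the architecture can be illustrated with a small Keras sketch. The layer widths, the single Dropout layer, and the 784-dimensional input are assumptions for illustration, not the setup used by Akl et al.

```python
import tensorflow as tf

def build_mlp(dropout_position, dropout_rate=0.5):
    """Build a small MLP and insert a Dropout layer after the hidden
    layer indexed by `dropout_position` (0-based). Varying this index
    changes where the hyperparameter sits in the architecture while
    its value stays fixed."""
    layers = [tf.keras.Input(shape=(784,))]
    for i, units in enumerate((256, 128, 64)):
        layers.append(tf.keras.layers.Dense(units, activation="relu"))
        if i == dropout_position:
            layers.append(tf.keras.layers.Dropout(dropout_rate))
    layers.append(tf.keras.layers.Dense(10, activation="softmax"))
    return tf.keras.Sequential(layers)

# Same dropout rate, three different positions -- three different models.
models = [build_mlp(pos) for pos in range(3)]
```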
“…Hyperparameter optimization is an essential step in the implementation of any machine learning model [22]. This optimization process involves repeatedly adjusting the model's hyperparameter values to minimize the testing error.…”
Section: Hyperparameter Tuning
confidence: 99%
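That "adjust values to minimize testing error" loop can be written as an exhaustive grid search. In this sketch, `train_and_eval` is a hypothetical user-supplied function that retrains the model under one configuration and returns its validation error; the grid itself would be chosen by the practitioner.

```python
from itertools import product

def grid_search(grid, train_and_eval):
    """Try every combination of hyperparameter values in `grid` and
    return the configuration with the lowest validation error."""
    names = list(grid)
    best_cfg, best_err = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        cfg = dict(zip(names, values))
        err = train_and_eval(cfg)  # hypothetical: retrain, return error
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err
```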
“…This optimization process involves repeatedly adjusting the model's hyperparameter values to minimize the testing error. Based on the research of Akl et al., the kernel initializer and the batch size need to be optimized for better accuracy [22]. Meanwhile, Kwon, D. H. proposed tuning the dropout rate, the number of neuron units, and the learning optimizer [23].…”
Section: Hyperparameter Tuning
confidence: 99%
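Taken together, the two citations single out the kernel initializer, batch size, dropout rate, neuron units, and optimizer as the knobs worth tuning. A minimal Keras sketch exposing exactly those knobs follows; the default values, two-layer architecture, and training schedule are illustrative assumptions, not recommendations from either cited paper.

```python
import tensorflow as tf

def build_and_train(x_train, y_train, x_val, y_val,
                    kernel_initializer="glorot_uniform",  # Akl et al. [22]
                    batch_size=32,                        # Akl et al. [22]
                    dropout_rate=0.2,                     # Kwon [23]
                    units=128,                            # Kwon [23]
                    optimizer="adam"):                    # Kwon [23]
    """Train a small classifier with the hyperparameters highlighted
    by the two cited studies exposed as arguments."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(x_train.shape[1],)),
        tf.keras.layers.Dense(units, activation="relu",
                              kernel_initializer=kernel_initializer),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              batch_size=batch_size, epochs=5, verbose=0)
    return model
```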