2023
DOI: 10.32604/iasc.2023.032255

Hyperparameter Tuning for Deep Neural Networks Based Optimization Algorithm

Abstract: The standard technique for training present-day Neural Network (NN) models is to use decaying Learning Rates (LR). Most of these techniques start with a large LR and decay it several times over the course of training. Decaying has been shown to improve both optimization and generalization. Other parameters, such as the network's size, the number of hidden layers, dropout to avoid overfitting, batch size, and so on, are chosen solely by heuristics. This work has proposed Adaptive Teaching Learning Ba…
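As a concrete illustration of the "large LR decayed multiple times" pattern the abstract describes, here is a minimal step-decay schedule in plain Python. The base rate, decay factor, and milestone epochs are assumed values for the sketch, not ones taken from the paper.

```python
# A minimal step-decay schedule, assuming a base LR of 0.1, a decay
# factor of 0.1, and milestones at epochs 30/60/90 (illustrative values,
# not ones from the paper).
def step_decay_lr(epoch, base_lr=0.1, decay_factor=0.1, milestones=(30, 60, 90)):
    """Return the learning rate for a given epoch under step decay."""
    lr = base_lr
    for milestone in milestones:
        if epoch >= milestone:
            lr *= decay_factor  # decay "multiple times over time"
    return lr

# LR is 0.1 for epochs 0-29, 0.01 for 30-59, 0.001 for 60-89, then 0.0001.
for epoch in (0, 30, 60, 90):
    print(epoch, step_decay_lr(epoch))
```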

Cited by 4 publications (3 citation statements) | References 18 publications
“…Absence of External Validation: Some studies relied solely on the same dataset for training and testing without utilising external datasets for model validation. 36 This lack of external validation can lead to an overestimation of the model's performance and restrict the applicability of the findings. Inadequate Comparative Analysis: While Cheng et al 97 conducted a comparison between deep learning and traditional machine learning algorithms, many publications did not evaluate their proposed approaches against established procedures or alternative algorithms.…”
Section: Results Of Individual Sources Of Evidence
confidence: 99%
“…Improving the algorithm's performance entails meticulous adjustments through hyperparameter tuning and careful exploration of the hyperparameter space using approaches such as grid search, random search, and Bayesian optimisation. 35,36 Data collection and preparation, designing an appropriate model architecture, separating data for training and validation, optimising hyperparameters, and conducting model training and validation are all critical aspects for developing improved cataract detection models. 35,37 These elements are critical for improving the effectiveness of machine-learning models.…”
Section: Concepts
confidence: 99%
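The grid search mentioned in the statement above exhaustively evaluates every combination in a predefined hyperparameter grid. Below is a minimal sketch using scikit-learn; the MLPClassifier estimator, the iris dataset, and the grid values are illustrative assumptions, not the setup of the cited study.

```python
# A minimal grid-search sketch with scikit-learn. The estimator, dataset,
# and grid values are illustrative assumptions, not any cited study's setup.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],  # network size / depth
    "alpha": [1e-4, 1e-3, 1e-2],                     # L2 regularization
    "learning_rate_init": [1e-3, 1e-2],
}

search = GridSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_grid,
    cv=3,                # 3-fold cross-validation over the training data
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```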
“…Random search performs a randomized search over hyperparameters from certain distributions over possible hyperparameter values. The search process continues until the desired metric is reached or until the predetermined computational budget is exhausted [36,37].…”
Section: Hyperparameter Tuning
confidence: 99%
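To make the quoted stopping rule concrete, here is a hand-rolled random-search sketch in Python: hyperparameters are sampled from assumed distributions until a target metric is reached or the trial budget is exhausted. The evaluate callable, the target of 0.95, and the sampled ranges are all hypothetical stand-ins for a real train-and-validate loop.

```python
import random

# A hand-rolled random-search loop mirroring the quoted description:
# sample hyperparameters from distributions until the desired metric is
# reached or the computational budget is exhausted. The evaluate callable,
# target metric, and sampling ranges are hypothetical stand-ins for a
# real train-and-validate step.
def random_search(evaluate, target_metric=0.95, budget=50, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(budget):                      # predetermined budget
        config = {
            "lr": 10 ** rng.uniform(-4, -1),     # log-uniform learning rate
            "dropout": rng.uniform(0.0, 0.5),
            "batch_size": rng.choice([32, 64, 128]),
        }
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
        if best_score >= target_metric:          # desired metric reached
            break
    return best_config, best_score

# Toy usage with a synthetic objective standing in for validation accuracy:
best, score = random_search(lambda c: 1.0 - abs(c["lr"] - 0.01) - 0.1 * c["dropout"])
print(best, score)
```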