2022
DOI: 10.1007/s11042-022-13820-0

The effect of choosing optimizer algorithms to improve computer vision tasks: a comparative study

Abstract: Optimization algorithms are used to improve model accuracy. The optimization process undergoes multiple cycles until convergence. A variety of optimization strategies have been developed to overcome the obstacles involved in the learning process. Some of these strategies have been considered in this study to learn more about their complexities. It is crucial to analyse and summarise optimization techniques methodically from a machine learning standpoint since this can provide direction for future work in both …

Cited by 76 publications (34 citation statements)
References 75 publications
“…The commonly used optimization methods include Stochastic Gradient Descent (SGD), Adaptive Gradient Algorithm (AdaGrad), Root Mean Square Propagation (RMSProp), Adaptive Moment Estimation (Adam), and Nesterov-accelerated Adaptive Moment Estimation (NAdam) [26]. In our work, NAdam was chosen as our optimization method, which proved to be very effective and stable.…”
Section: Experiments Details
Citation type: mentioning
confidence: 99%
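
The optimizers named in this snippet are all available in common deep-learning frameworks. As a minimal sketch (not taken from the cited paper, with illustrative hyperparameters rather than the authors' settings), this is how they might be instantiated and applied for one training step using PyTorch's torch.optim:

```python
# Hypothetical sketch: instantiating the optimizers named in the snippet.
# The model, data, and learning rates are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in model for illustration

optimizers = {
    "SGD": torch.optim.SGD(model.parameters(), lr=0.01),
    "AdaGrad": torch.optim.Adagrad(model.parameters(), lr=0.01),
    "RMSProp": torch.optim.RMSprop(model.parameters(), lr=0.001),
    "Adam": torch.optim.Adam(model.parameters(), lr=0.001),
    "NAdam": torch.optim.NAdam(model.parameters(), lr=0.002),
}

# One training step with the snippet's choice, NAdam:
opt = optimizers["NAdam"]
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```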
“…Optimization algorithms play a pivotal role in improving the efficiency of intelligent systems by fine-tuning parameters, reducing computational complexity, and enhancing overall performance. Notably, research by [5] highlights the importance of optimization techniques for the training and accuracy of machine learning models and neural networks, demonstrating how advancements in optimization algorithms can significantly accelerate convergence and reduce training time. Many real-world problems involve dynamic environments where the optimal solution may change over time; metaheuristic algorithms with adaptive mechanisms can dynamically adjust their search strategies, enabling them to cope with changing conditions [6].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
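
As a hypothetical illustration of the "adaptive mechanism" this snippet describes (the objective function and constants below are invented for the example, not drawn from [6]), a simple random search can adapt its own step size based on recent success, widening after improvements and narrowing after failures:

```python
# Hypothetical sketch of an adaptive metaheuristic: a 1+1-style random
# search whose step size grows on improvement and shrinks otherwise,
# letting it re-focus its search as conditions change.
import random

def adaptive_search(f, x0, steps=500):
    x, step = x0, 1.0
    best = f(x)
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        val = f(cand)
        if val < best:      # improvement: accept and widen the search
            x, best = cand, val
            step *= 1.1
        else:               # failure: narrow the search radius
            step *= 0.95
    return x, best

# Example: minimize a simple quadratic with optimum at x = 3
x_opt, f_opt = adaptive_search(lambda x: (x - 3.0) ** 2, x0=0.0)
print(f"x = {x_opt:.3f}, f(x) = {f_opt:.5f}")
```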
“…Despite these promising results, VAEs are known to be sensitive to hyperparameters such as the learning rate, the number of hidden layers, the optimizer, and the number of neurons in each layer [17][18][19]. Although VAEs are increasingly widely used, there is a lack of guidelines for selecting training hyperparameters.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
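
The hyperparameters this snippet lists (learning rate, number of hidden layers, optimizer, neurons per layer) are exactly the axes a small search grid would cover. A hypothetical sketch of such a grid follows; all names and values are illustrative, not taken from the cited works:

```python
# Hypothetical sketch: enumerating VAE training configurations over the
# hyperparameter axes the snippet names. Values are illustrative only.
from itertools import product

learning_rates = [1e-4, 1e-3]
hidden_layers = [1, 2, 3]
optimizers = ["Adam", "NAdam", "RMSProp"]
neurons_per_layer = [64, 128, 256]

configs = [
    {"lr": lr, "depth": d, "optimizer": opt, "width": w}
    for lr, d, opt, w in product(
        learning_rates, hidden_layers, optimizers, neurons_per_layer
    )
]
print(f"{len(configs)} candidate VAE configurations to evaluate")
```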