2021
DOI: 10.1088/1742-6596/1743/1/012002
Comparative study of optimization techniques in deep learning: Application in the ophthalmology field

Abstract: Optimization is a branch of mathematics that aims to model, analyse and solve, analytically or numerically, problems of minimizing or maximizing a function over a specific dataset. Several optimization algorithms are used in deep learning (DL) systems, such as the gradient descent (GD) algorithm. Given the importance and efficiency of the GD algorithm, several research works have improved it and produced several other variants, which have also enjoyed great su…

Cited by 29 publications (11 citation statements)
References 7 publications

Citation statements (ordered by relevance):
“…The need to use an optimization method when training a neural network helps the algorithm generalize properly and reduces the loss function by finding optimized values for the weights. This enables accurate prediction on new data [54].…”
Section: Phase 3: Optimization Technique (mentioning)
confidence: 99%
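To illustrate the mechanism the statement above refers to, namely the optimizer lowering the loss by repeatedly moving the weights along the negative gradient, here is a minimal sketch of plain gradient descent on a toy least-squares problem. The data, learning rate and step count are arbitrary assumptions for illustration, not taken from the cited work.

```python
import numpy as np

# Toy least-squares problem: minimize L(w) = ||Xw - y||^2 / n with vanilla GD.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

w = np.zeros(3)   # initial weights
lr = 0.1          # learning rate (hypothetical choice)
for step in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the loss w.r.t. w
    w -= lr * grad                          # GD update: w <- w - lr * grad

print(w)  # close to true_w once the loss has been driven down
```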
“…The optimization method used largely determines the convergence rate of the data processing. Optimizers that can be used include the AdaDelta, Nesterov, momentum, RMSProp, AdaGrad, Adam, Nadam and AdaMax GD algorithms [12]. The Adam and SGD estimation methods are very flexible and converge significantly faster than the other methods, as explained by Su and Kek [13].…”
Section: Introduction (unclassified)
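As a concrete, hedged illustration of how such optimizer comparisons are usually set up (not code from the cited works), the sketch below keeps one PyTorch training loop and only swaps the optimizer object between Adam and SGD; the model, data and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder model and data; only the optimizer line changes between runs.
model = nn.Linear(10, 1)
data, target = torch.randn(64, 10), torch.randn(64, 1)
loss_fn = nn.MSELoss()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()   # compute gradients of the loss w.r.t. the weights
    optimizer.step()  # apply the chosen optimizer's update rule
```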
“…Adagrad was proposed to adapt the learning rate at each time step according to the importance of each parameter update [5]. However, Adagrad also has several disadvantages: it is computationally expensive, since it must maintain per-parameter squared-gradient statistics, and as the learning rate keeps decreasing it results in slow training. Nevertheless, the algorithm provides an approximate minimization with a reasonable speed of convergence, which makes it usable in future work on large datasets [20]. Equation (2) shows the weight update rule for Adagrad.…”
Section: Adaptive Gradient (Adagrad) (mentioning)
confidence: 99%
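The excerpt cites the paper's Equation (2), which is not reproduced in this report. For reference, the standard Adagrad rule scales a global learning rate by the inverse square root of the per-parameter sum of squared gradients; the sketch below implements that textbook form, with names and default values chosen for illustration rather than taken from the paper.

```python
import numpy as np

# Textbook Adagrad update:
#   G_t <- G_{t-1} + g_t**2                  (per-parameter accumulator)
#   w   <- w - lr * g_t / (sqrt(G_t) + eps)  (per-parameter scaled step)
def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum

# Usage: keep one accumulator per weight vector, starting at zero.
w, accum = np.zeros(3), np.zeros(3)
# w, accum = adagrad_step(w, grad, accum)  # call once per gradient evaluation
```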