2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan)
DOI: 10.1109/icce-taiwan49838.2020.9258085

AdaGrad Gradient Descent Method for AI Image Management

Cited by 9 publications (7 citation statements)
References 7 publications

“…The choice of the model optimizer holds significant importance, as it aims to minimize the loss function and bring it closer to the global minimum. 36 Out of the several available optimizers, three model optimizers including the stochastic gradient descent (SGD) optimizer, 37 adaptive moment estimation (Adam) optimizer, 38 and adaptive gradient (AdaGrad) optimizer 39 were selected. On the other hand, the learning rate is also an important hyperparameter that directly affects the training speed and model loss.…”
Section: Optimizers and Learning Rate (mentioning)
confidence: 99%
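As an illustrative aside (not part of the quoted statement), the three optimizers named above are available as drop-in choices in common frameworks. The minimal PyTorch sketch below assumes a placeholder linear model and a single learning-rate value:

import torch

model = torch.nn.Linear(10, 2)   # placeholder model, assumed for illustration
lr = 0.01                        # the learning-rate hyperparameter discussed above

# The three optimizers mentioned in the statement; any one can be selected.
optimizer = {
    "sgd": torch.optim.SGD(model.parameters(), lr=lr),
    "adam": torch.optim.Adam(model.parameters(), lr=lr),
    "adagrad": torch.optim.Adagrad(model.parameters(), lr=lr),
}["adagrad"]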
“…As a result, the learning rate often becomes infinitesimally small before convergence. It was shown that AdaGrad can have fewer generalisation errors compared to the Adam optimiser [66].…”
Section: Training (mentioning)
confidence: 99%
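A minimal NumPy sketch of the behaviour described above (all values are illustrative assumptions, not taken from the cited paper): because AdaGrad accumulates squared gradients, the effective step size lr / sqrt(G_t) can only shrink as training proceeds.

import numpy as np

lr, eps = 0.1, 1e-8
accum = 0.0                               # running sum of squared gradients, G_t
for step in range(1, 6):
    grad = 1.0                            # constant gradient, purely for illustration
    accum += grad ** 2                    # G_t = G_{t-1} + g_t^2
    effective_lr = lr / (np.sqrt(accum) + eps)
    print(step, round(effective_lr, 4))   # 0.1, 0.0707, 0.0577, 0.05, 0.0447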
“…The evaluation is achieved in three main steps: (1) load the data, (2) train both models for WN18 and FB15K, and (3) evaluate both models. For evaluation, we used two different optimization techniques, the Adam optimizer [31] and the AdaGrad optimizer [32]. These are methods used to change attributes such as the weights and the learning rate to reduce the losses of the neural network.…”
Section: Fine Tuning Of Parameters Values (mentioning)
confidence: 99%
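For context, a hedged sketch of the optimizer's role described in that statement: the optimizer adjusts the weights from gradients of the loss, and either Adam or AdaGrad can be dropped into the same loop. The model and data below are placeholders, not the WN18/FB15K embedding models of the cited work.

import torch

model = torch.nn.Linear(4, 1)                # placeholder model
x, y = torch.randn(8, 4), torch.randn(8, 1)  # placeholder data
loss_fn = torch.nn.MSELoss()

optimizer = torch.optim.Adagrad(model.parameters(), lr=0.05)
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # interchangeable choice

for _ in range(20):
    optimizer.zero_grad()            # clear accumulated gradients
    loss = loss_fn(model(x), y)      # compute the loss
    loss.backward()                  # backpropagate gradients
    optimizer.step()                 # update weights to reduce the loss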
“…Adaptive Gradient (AdaGrad) [32] is an algorithm for gradient-based optimization. It is used to perform smaller updates.…”
Section: AdaGrad Optimizer (mentioning)
confidence: 99%
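To make the point about smaller updates concrete, the assumed NumPy sketch below applies the per-parameter AdaGrad update rule to one frequently updated and one rarely updated parameter: the frequently updated parameter accumulates the larger squared-gradient history and therefore receives the smaller per-step update.

import numpy as np

theta = np.zeros(2)        # parameter 0 gets a gradient every step, parameter 1 only occasionally
accum = np.zeros(2)        # per-parameter sum of squared gradients
lr, eps = 0.1, 1e-8

for step in range(1, 11):
    grad = np.array([1.0, 1.0 if step % 5 == 0 else 0.0])   # sparse gradient for parameter 1
    accum += grad ** 2
    theta -= lr * grad / (np.sqrt(accum) + eps)              # AdaGrad update

# Effective step sizes after 10 iterations: the frequently updated parameter 0
# has the larger accumulator and therefore the smaller per-step update.
print(lr / (np.sqrt(accum) + eps))   # approx [0.0316, 0.0707]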