2021
DOI: 10.1007/978-981-16-3690-5_135

A Performance Comparison of Optimization Algorithms on a Generated Dataset

Abstract: Optimization algorithms are among the important machine learning techniques; they play a vital role in improving the performance of a model. In supervised learning, data samples whose outcomes are already known are given to train a model. A dataset of 5000 samples with 2 features and 4 classes has been generated. We've trained a model on this dataset with different optimization algorithms such as gradient descent, mini-batch gradient descent, Momentum, NAG, RMSprop, Adagrad, and Adam. To check the performance, these …
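As a rough illustration of the setup the abstract describes (not the authors' code), the sketch below generates a comparable 5000-sample, 2-feature, 4-class dataset with scikit-learn and trains the same small network with several of the listed optimizers via PyTorch. The network, hyperparameters, and full-batch loop are assumptions; mini-batch gradient descent would simply swap in a DataLoader.

```python
# Hypothetical sketch: compare several optimizers on a generated
# 5000-sample, 2-feature, 4-class dataset (not the paper's code).
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Generate a dataset shaped like the one described in the abstract.
X, y = make_classification(n_samples=5000, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr = torch.tensor(X_tr, dtype=torch.float32)
y_tr = torch.tensor(y_tr, dtype=torch.long)
X_te = torch.tensor(X_te, dtype=torch.float32)
y_te = torch.tensor(y_te, dtype=torch.long)

def make_optimizer(name, params):
    # Optimizers compared in the paper; learning rates here are guesses.
    opts = {
        "sgd":      lambda: torch.optim.SGD(params, lr=0.1),
        "momentum": lambda: torch.optim.SGD(params, lr=0.1, momentum=0.9),
        "nag":      lambda: torch.optim.SGD(params, lr=0.1, momentum=0.9, nesterov=True),
        "adagrad":  lambda: torch.optim.Adagrad(params, lr=0.1),
        "rmsprop":  lambda: torch.optim.RMSprop(params, lr=0.01),
        "adam":     lambda: torch.optim.Adam(params, lr=0.01),
    }
    return opts[name]()

results = {}
for name in ["sgd", "momentum", "nag", "adagrad", "rmsprop", "adam"]:
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))
    opt = make_optimizer(name, model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(100):            # 100 epochs, full-batch for brevity
        opt.zero_grad()
        loss = loss_fn(model(X_tr), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        acc = (model(X_te).argmax(dim=1) == y_te).float().mean().item()
    results[name] = acc

for name, acc in results.items():
    print(f"{name:>8s}: test accuracy = {acc:.3f}")
```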

Cited by 42 publications (5 citation statements)
References 10 publications (10 reference statements)
“…We have taught the model in the first 100 rounds of training. Our model's performance improved with every 20th epoch, resulting in higher accuracies according to our optimized methods[13,15].…”
mentioning
confidence: 84%
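The statement above describes training for 100 epochs and checking accuracy at every 20th epoch. A minimal sketch of such an evaluation schedule (hypothetical helper, not the cited paper's code) could look like this:

```python
# Hypothetical schedule: train for 100 epochs and record accuracy
# at every 20th epoch, as the citing paper describes.
import torch

def train_with_checkpoints(model, optimizer, loss_fn, X, y, epochs=100, every=20):
    history = {}
    for epoch in range(1, epochs + 1):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
        if epoch % every == 0:
            with torch.no_grad():
                acc = (model(X).argmax(dim=1) == y).float().mean().item()
            history[epoch] = acc  # accuracies at epochs 20, 40, 60, 80, 100
    return history
```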
“…Dharma et al. [13] have introduced various optimization algorithms like gradient descent, mini-batch gradient descent, momentum, NAG, RMSprop, Adagrad, and Adam. Abdurakhmon Sadiev et al. [14] introduced federated learning (FL) as a framework for distributed learning and optimization.…”
Section: Objectives
mentioning
confidence: 99%
“…After calculating the gradient, the squared gradient is accumulated to apply RMSprop [30]: r ← ρr + (1 − ρ) g ⊙ g (6), where ρ is the rate of deterioration (the decay rate). The parameter update is then calculated and applied as follows: ∆θ ← −(ε / √(δ + r)) ⊙ g (7), θ ← θ + ∆θ (8).…”
Section: E. RMSprop Optimization
mentioning
confidence: 99%
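In the reconstructed excerpt above, ε (learning rate), δ (a small constant for numerical stability), and ⊙ (element-wise product) follow the standard RMSprop formulation and are assumptions here, since the quoted text does not define them. A minimal NumPy sketch of one update step under those assumptions:

```python
# Minimal NumPy sketch of the RMSprop step in Eqs. (6)-(8); hyperparameter
# values are conventional defaults, not taken from the cited paper.
import numpy as np

def rmsprop_step(theta, g, r, lr=0.001, rho=0.9, delta=1e-6):
    """One RMSprop update.

    theta: current parameters; g: gradient of the loss w.r.t. theta;
    r: running average of squared gradients (state, same shape as theta).
    """
    r = rho * r + (1.0 - rho) * g * g            # Eq. (6): accumulate squared gradient
    delta_theta = -lr / np.sqrt(delta + r) * g   # Eq. (7): scale the gradient element-wise
    theta = theta + delta_theta                  # Eq. (8): apply the update
    return theta, r
```

Carrying r across successive calls reproduces the exponentially decaying accumulation that ρ controls.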
“…Therefore, this study aims to compare the performance of RMSProp and Adam optimization in neural machine translation from Minangkabau to Indonesian. In this study, we will use existing translation data to train a neural machine translation model and then compare the model's performance using RMSProp and Adam optimization [7]. We will analyze the results of both optimization techniques and evaluate translation performance based on standard evaluation metrics [8].…”
Section: Introduction
mentioning
confidence: 99%
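Assuming BLEU (via sacrebleu) is one of the standard evaluation metrics the study refers to, the two trained models could be scored roughly as below; the sample sentences and variable names are illustrative only.

```python
# Hypothetical scoring of Minangkabau-to-Indonesian outputs from two models,
# one trained with RMSProp and one with Adam, using corpus-level BLEU.
import sacrebleu

references = ["saya pergi ke pasar"]      # Indonesian reference translation (example)
hyp_rmsprop = ["saya pergi ke pasar"]     # output of the RMSProp-trained model (example)
hyp_adam = ["saya pergi pasar"]           # output of the Adam-trained model (example)

bleu_rmsprop = sacrebleu.corpus_bleu(hyp_rmsprop, [references]).score
bleu_adam = sacrebleu.corpus_bleu(hyp_adam, [references]).score
print(f"RMSProp BLEU: {bleu_rmsprop:.1f}   Adam BLEU: {bleu_adam:.1f}")
```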
“…Other related works include Ansari et al. (2021), Ting et al. (2022), Dabbu et al. (2022), Gaddam et al. (2021a), Gaddam et al. (2021b), Zhu et al. (2021), Sethi et al. (2020), Kapil et al. (2016), Karuppusamy et al. (2021), Gautam et al. (2019).…”
Section: Related Work
mentioning
confidence: 99%