2019
DOI: 10.3390/sym11070942

An Enhanced Optimization Scheme Based on Gradient Descent Methods for Machine Learning

Abstract: The learning process of machine learning consists of finding values of unknown weights in a cost function by minimizing that cost function based on learning data. However, since the cost function is not convex, finding its minimum value is difficult. The existing methods used to find the minimum usually use the first derivative of the cost function. When even a local minimum (but not the global minimum) is reached, the first derivative of the cost function becomes zero, so th…
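To make the abstract's point concrete, below is a minimal sketch, not the paper's enhanced scheme: plain first-derivative gradient descent on an illustrative non-convex cost, which stalls at a local minimum because the gradient vanishes there. The cost function, starting point, and learning rate are assumptions chosen for illustration.

```python
# Hedged sketch (not the paper's method): gradient descent on a toy
# non-convex cost. The quartic below has a local minimum near w ≈ 0.96
# and a deeper global minimum near w ≈ -1.04 (illustrative choice).
import numpy as np

def cost(w):
    return w**4 - 2 * w**2 + 0.3 * w

def grad(w):
    # First derivative of the cost above.
    return 4 * w**3 - 4 * w + 0.3

w = 1.5    # assumed start, in the basin of the local minimum
lr = 0.01  # assumed learning rate
for _ in range(1000):
    w -= lr * grad(w)  # first-derivative update rule

# The iterate settles where grad(w) = 0: the local minimum near 0.96,
# not the global minimum near -1.04.
print(f"w = {w:.4f}, cost = {cost(w):.4f}")
```

Starting instead near w = -1.5 would reach the global minimum, which is exactly the sensitivity to initialization that first-derivative methods exhibit.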

Cited by 15 publications (9 citation statements) · References 16 publications
“…From this figure, it can be concluded that the gradient boost algorithm gives the best F1-score [27] of around 0.7 and the k-neighbors classifier gives the lowest F1-score of 0.6 among these four algorithms; in addition, logistic regression and the support vector classifier give the same F1-score of 0.66. From the simulation results, it can be concluded that a total of four algorithms are used in this paper, LR [28], KNN [29], and gradient descent [30], and the best accuracy achieved is 81.25%, which is given by the gradient descent classifier.…”
Section: Results (mentioning)
confidence: 99%
“…Equation (11) presents the Logistic Regression probability. To estimate the classes $\hat{y}_i = \arg\max_{y_\kappa \in C} p(y = y_\kappa \mid x = x_i)$ and to determine the parameters $w$ that give the best results, the algorithm maximises the log-likelihood using gradient descent [49]. The log-likelihood function guarantees that the gradient descent algorithm can converge to the global minimum.…”
Section: Machine and Deep Learning Architectures, 1) Logistic Regression (mentioning)
confidence: 99%
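The prediction rule quoted above, ŷᵢ = argmax over classes of p(y = y_κ | x = x_i), can be sketched directly. Below is a minimal, hedged example assuming multiclass (softmax) logistic regression; the toy data, shapes, and untrained weight matrix W are assumptions, not taken from the cited work.

```python
# Hedged sketch of the quoted prediction rule for (softmax) logistic
# regression: compute p(y = y_k | x = x_i) for every class, then argmax.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(W, X):
    probs = softmax(X @ W)       # shape (n_samples, n_classes)
    return probs.argmax(axis=1)  # ŷ_i = argmax_k p(y = y_k | x = x_i)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # 5 toy samples, 3 features (assumed)
W = rng.normal(size=(3, 4))  # 3 features, 4 classes; untrained (assumed)
print(predict(W, X))         # one predicted class index per sample
```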
“…Logistic Regression (LOGREG) is a classification algorithm successfully used, in many cases, as a baseline for the Sentiment Analysis task to predict the class in which an observation can be categorized [36,37]. The algorithm tries to minimize the error of its estimations using the log-likelihood and to determine the parameters that produce the best estimations using gradient descent [38]. The log-likelihood function guarantees that the gradient descent algorithm can converge to the global minimum.…”
Section: Logistic Regression (mentioning)
confidence: 99%
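Both statements above credit gradient descent on the log-likelihood for fitting the parameters w. Here is a minimal sketch under an assumed binary logistic regression setup: gradient descent on the negative log-likelihood, whose gradient is Xᵀ(σ(Xw) − y). The toy data, learning rate, and iteration count are illustrative assumptions, not taken from the cited works.

```python
# Hedged sketch: fitting binary logistic regression by gradient descent
# on the (averaged) negative log-likelihood.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, n_iter=500):
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Gradient of the mean negative log-likelihood: X^T (sigmoid(Xw) - y) / n
        g = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * g
    return w

# Toy linearly separable data (assumed): first column is a bias term.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, -0.5], [1.0, -1.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = fit_logreg(X, y)
print(w, sigmoid(X @ w))  # fitted weights and in-sample probabilities
```

Because the binary log-likelihood is concave in w, descending its negative has a single global optimum, which matches the convergence claim in the quoted passages.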