2019
DOI: 10.3390/make1020043

Generalization of Parameter Selection of SVM and LS-SVM for Regression

Abstract: A Support Vector Machine (SVM) for regression is a popular machine learning model that aims to solve nonlinear function approximation problems wherein explicit model equations are difficult to formulate. The performance of an SVM depends largely on the selection of its parameters. Choosing between an SVM that solves an optimization problem with inequality constraints and one that solves the least squares of errors (LS-SVM) adds to the complexity. Various methods have been proposed for tuning parameters, but no a…
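For orientation, the two optimization problems the abstract contrasts can be written in their standard textbook forms; the notation below is generic and not necessarily the paper's. The ε-insensitive SVR solves a problem with inequality constraints, while LS-SVM replaces them with equality constraints and a squared-error term.

```latex
% Standard epsilon-SVR primal (inequality constraints):
\min_{w,b,\xi,\xi^*}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}(\xi_i + \xi_i^*)
\quad\text{s.t.}\quad
\begin{cases}
y_i - w^\top\phi(x_i) - b \le \varepsilon + \xi_i,\\
w^\top\phi(x_i) + b - y_i \le \varepsilon + \xi_i^*,\\
\xi_i,\ \xi_i^* \ge 0.
\end{cases}

% LS-SVM for regression (equality constraints, squared errors):
\min_{w,b,e}\ \tfrac{1}{2}\lVert w\rVert^2 + \tfrac{\gamma}{2}\sum_{i=1}^{n} e_i^2
\quad\text{s.t.}\quad y_i = w^\top\phi(x_i) + b + e_i.
```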

Cited by 13 publications (6 citation statements) · References 30 publications
“…The correct value of the γ parameter in the RBF kernel can avoid under-fitting and over-fitting in prediction [39]. The ε parameter influences the bias significantly, and its optimal value depends on the type of noise present in the dataset [40,41]. The C parameter affects the number of support vectors, and a proper value of C can minimize the over-fitting problem [42].…”
Section: Support Vector Machines
confidence: 99%
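A minimal sketch of tuning the three SVR hyper-parameters this statement discusses (γ, ε, C) by cross-validated grid search. This is a generic illustration, not the selection method proposed in the cited paper; the synthetic data and grid values are arbitrary placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + rng.normal(scale=0.1, size=200)  # noisy nonlinear target

param_grid = {
    "gamma": [0.01, 0.1, 1.0, 10.0],  # RBF width: too small under-fits, too large over-fits
    "epsilon": [0.01, 0.1, 0.5],      # tube width: should roughly match the noise level
    "C": [0.1, 1.0, 10.0, 100.0],     # penalty: influences the number of support vectors
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```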
“…A high value of C results in cases outside the tolerance margin being heavily penalized, thereby reducing training bias but increasing prediction variance and potentially leading to overfitting. In contrast, low values of C may increase training bias (Zeng et al., 2019).…”
Section: Support Vector Machines (SVM)
confidence: 98%
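The bias/variance trade-off in C described above can be made concrete with a small experiment: a very large C fits the training set tightly (risking over-fitting), while a very small C under-fits. The data, fixed γ and ε values, and the three C values below are illustrative assumptions, not taken from the cited work.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for C in (0.01, 1.0, 1e4):  # small C under-fits; very large C over-fits
    model = SVR(kernel="rbf", gamma=1.0, epsilon=0.1, C=C).fit(X_tr, y_tr)
    print(f"C={C:g}: train MSE={mean_squared_error(y_tr, model.predict(X_tr)):.4f}, "
          f"test MSE={mean_squared_error(y_te, model.predict(X_te)):.4f}, "
          f"support vectors={len(model.support_)}")
```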
“…To prevent overfitting, SVM adjusts the classification decision function based on the principle of structural risk minimization, rather than simply minimizing the misclassification error on the training set (Chen et al, 2007). The optimal parameters for the SVM regression model were estimated as per the method outlined by Zeng et al (2019).…”
Section: Support Vector Machines (SVM)
confidence: 99%
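The structural risk minimization principle the statement mentions balances model capacity (margin width) against empirical error rather than minimizing training error alone. In its standard soft-margin form (generic textbook notation, not the cited papers' exact formulation) the objective reads:

```latex
% Soft-margin SVM primal: capacity control (margin term) plus empirical error.
\min_{w,b,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\,(w^\top\phi(x_i)+b) \ge 1 - \xi_i,\quad \xi_i \ge 0.
```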
“…During the training phase, SVM aims to minimize the classification error while maximizing the margin. It does this by formulating a cost function that penalizes misclassification and accounts for the margin width (Zeng et al, 2019). The optimization process involves finding the Lagrange multipliers associated with the support vectors to determine the hyperplane coefficients.…”
Section: SVM
confidence: 99%
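The Lagrange-multiplier step described above corresponds, in the standard derivation (again generic notation, assumed rather than quoted from the cited work), to solving the Wolfe dual of the soft-margin problem:

```latex
% Dual problem: find the Lagrange multipliers alpha_i by maximizing
\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i
  - \tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i y_j\,K(x_i,x_j)
\quad\text{s.t.}\quad 0 \le \alpha_i \le C,\quad \sum_{i=1}^{n}\alpha_i y_i = 0.

% The hyperplane coefficients then follow from the multipliers,
w = \sum_{i=1}^{n}\alpha_i\,y_i\,\phi(x_i),
% and the support vectors are exactly the x_i with alpha_i > 0.
```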