2017
DOI: 10.1016/j.asoc.2017.07.017

A hyperparameters selection technique for support vector regression models

Cited by 51 publications (27 citation statements)
References 18 publications
“…C is the regularization parameter, which controls the trade-off between the allowed error deviations and the complexity of the decision function; ε sets the error penalty of the loss function; and γ represents the width of the RBF kernel. Several optimization strategies have been suggested [41], and many involve hyperparameter fine-tuning through computationally exhaustive grid-search and cross-validation approaches. However, despite its highly accurate results, the grid-search technique takes prohibitive computation time to achieve reliable results, especially for schemes where re-configurations of the SVM model are required due to changes in the input calibration sets.…”
Section: Methods
confidence: 99%
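
For concreteness, the sketch below shows the grid-search-plus-cross-validation tuning this excerpt describes. The library (scikit-learn), the synthetic data, and the grid values are assumptions made for illustration; none of them come from the cited paper.

    from sklearn.datasets import make_regression
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVR

    # Synthetic calibration set standing in for real data.
    X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

    param_grid = {
        "C": [0.1, 1, 10, 100],          # regularization strength
        "epsilon": [0.01, 0.1, 0.5],     # width of the insensitive tube
        "gamma": [0.001, 0.01, 0.1, 1],  # RBF kernel width
    }

    # Exhaustive 5-fold cross-validation over the full Cartesian grid:
    # 4 * 3 * 4 = 48 candidates, each refit 5 times -- the cost the quote
    # calls prohibitive when the model must be re-tuned after every
    # change in the calibration set.
    search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                          scoring="neg_mean_squared_error")
    search.fit(X, y)
    print("best hyperparameters:", search.best_params_)

The total cost grows multiplicatively with each grid dimension and the number of folds, which is exactly why repeated re-tuning after every calibration change becomes impractical.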
“…These parameters are called hyperparameters: the penalty parameter C, the ε-insensitive loss parameter ε, and the kernel parameter. SVR performance is very sensitive to the selection of these hyperparameters, and there is no mathematical procedure for deriving the exact desired values [59]. As a result, the selection of these hyperparameters is a crucial part of research on SVR.…”
Section: The Proposed Algorithm
confidence: 99%
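
As a rough illustration of that sensitivity (a sketch on synthetic data, assuming scikit-learn; not a procedure from the cited work), scoring the same model with two RBF widths set orders of magnitude apart shows how strongly the cross-validated error depends on the hyperparameter choice:

    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

    # Identical data and model family; only the RBF kernel width changes.
    for gamma in (0.001, 10.0):
        mse = -cross_val_score(
            SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma=gamma),
            X, y, cv=5, scoring="neg_mean_squared_error",
        ).mean()
        print(f"gamma={gamma}: cross-validated MSE = {mse:.2f}")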
“…It was probably caused by the relatively poor accuracy of the olfactory measurement results, because the SVR algorithm is very sensitive to noise in the training data [35]. Therefore, noise in the training samples (arising from error in the olfactory evaluation) can easily affect the fit of the SVR model.…”
Section: Odor Intensity Predictive Performance of the SVR Model
confidence: 99%
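
The noise sensitivity this excerpt invokes can be reproduced in a minimal sketch (illustrative only; the data, model settings, and library are assumptions, not details from the cited study): fit the same SVR on clean and on noise-corrupted targets, then compare errors on a clean held-out set.

    import numpy as np
    from sklearn.metrics import mean_squared_error
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-3, 3, size=(200, 1))
    X_test = rng.uniform(-3, 3, size=(100, 1))
    y_clean = np.sin(X_train).ravel()
    y_test = np.sin(X_test).ravel()

    # A small epsilon-tube makes the model chase individual noisy labels,
    # mimicking the effect of inaccurate olfactory measurements.
    for noise_std in (0.0, 0.5):
        y_train = y_clean + rng.normal(0.0, noise_std, size=y_clean.shape)
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)
        mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"label noise std={noise_std}: test MSE = {mse:.3f}")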