2003
DOI: 10.1016/s0925-2312(02)00632-x

Determination of the spread parameter in the Gaussian kernel for classification and regression

Cited by 347 publications
(166 citation statements)
References 13 publications
“…Originally, SVMs were developed for pattern-recognition problems, such as image recognition, 30 microarray gene-expression classification, 11 protein-fold recognition, 31 protein structural-class prediction, 32 identification of protein cleavage sites, QSAR, and other pharmaceutical data analysis. 11,33 With the introduction of the ε-insensitive loss function, SVMs have since been extended to nonlinear regression estimation and time-series prediction, where excellent performance has been obtained. 29 The basic idea in SVM is to map the data x into a higher-dimensional feature space F via a nonlinear mapping Φ and then perform linear regression in that space. Regression approximation therefore addresses the problem of estimating a function from a given data set G = {( …”
Section: Methods
confidence: 99%
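The passage above describes the core SVM idea: map x into a feature space F via Φ, then do linear regression there. A minimal sketch of that idea, using kernel ridge regression with a Gaussian kernel as a stand-in for full ε-insensitive SVR (the function names and toy data are illustrative, not from the cited papers):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_ridge_fit(X, y, sigma=1.0, lam=1e-2):
    # "Linear regression in feature space F" via the kernel trick:
    # solve (K + lam I) alpha = y instead of working with Phi explicitly.
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy 1-D regression problem: noisy samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)

alpha = kernel_ridge_fit(X, y, sigma=0.8, lam=1e-2)
pred = kernel_ridge_predict(X, alpha, X, sigma=0.8)
mse = float(np.mean((pred - y) ** 2))
print(round(mse, 4))
```

The choice of `sigma` here is exactly the spread parameter the indexed paper is about: too small and the fit memorizes noise, too large and the kernel matrix flattens toward a constant.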
“…The LS-SVM uses a least-squares loss function instead of the ε-insensitive loss function, which makes it easier to optimize and shortens computation time [17]. In LS-SVM, the ε-tube and slack variables are replaced by error variables that measure the distance from each point to the regression function.…”
Section: Least Squares Support Vector Machine
confidence: 99%
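The least-squares loss mentioned above is what makes LS-SVM fast to train: it reduces the quadratic program of standard SVR to a single linear system. A sketch of the standard LS-SVM regression system (the toy data and function names are my own, not from [17]):

```python
import numpy as np

def lssvm_fit(X, y, sigma=1.0, gamma=100.0):
    # LS-SVM regression: the least-squares loss yields one linear system
    #   [ 0      1^T        ] [ b     ]   [ 0 ]
    #   [ 1   K + I/gamma   ] [ alpha ] = [ y ]
    # where K is the Gaussian kernel matrix and gamma the regularization.
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]   # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Same toy problem: noisy samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)

b, alpha = lssvm_fit(X, y, sigma=0.8, gamma=100.0)
pred = lssvm_predict(X, b, alpha, X, sigma=0.8)
mse = float(np.mean((pred - y) ** 2))
```

Note the trade-off the quote describes: every training point gets a nonzero error variable (and hence a nonzero alpha), so LS-SVM gives up the sparsity of ε-insensitive SVR in exchange for the cheap linear solve.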
“…However, their work relies almost entirely on empirical evidence and qualitative remarks. Wang et al (2003) argue that the Gaussian parameter should be chosen with respect to a Fisher-discriminant-based measure. Guo et al use mutual-information theory to guide parameter selection (Guo et al, 2005a) and parameter scaling (Guo et al, 2005b).…”
Section: State-of-the-art
confidence: 99%
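To illustrate the kind of data-driven spread selection this passage attributes to Wang et al., here is a hypothetical class-separability score, not their actual Fisher-discriminant measure: average within-class kernel similarity minus average between-class similarity, maximized over candidate sigma values. All names and the toy data are assumptions for illustration:

```python
import numpy as np

def separability(X, y, sigma):
    # Hypothetical separability proxy: mean Gaussian-kernel similarity
    # between same-class pairs minus mean similarity between
    # different-class pairs. Larger means classes are better separated
    # in the kernel-induced feature space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    same = y[:, None] == y[None, :]
    off_diag = ~np.eye(len(X), dtype=bool)
    return K[same & off_diag].mean() - K[~same].mean()

def pick_sigma(X, y, candidates):
    # Grid search: keep the sigma that maximizes the separability score.
    scores = [separability(X, y, s) for s in candidates]
    return candidates[int(np.argmax(scores))]

# Two well-separated 2-D Gaussian blobs as toy classification data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

best = pick_sigma(X, y, [0.1, 0.5, 1.0, 2.0, 5.0])
print(best)
```

A very small sigma drives all off-diagonal similarities toward zero and a very large one toward one; in both extremes the score collapses, so the maximizer sits at an intermediate spread matched to the data's class geometry.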