2004
DOI: 10.1023/b:mach.0000015879.28004.9b
A Meta-Learning Method to Select the Kernel Width in Support Vector Regression

Abstract: The Support Vector Machine algorithm is sensitive to the choice of parameter settings. If these are not set correctly, the algorithm may perform poorly. Suggesting a good setting is thus an important problem. We propose a meta-learning methodology for this purpose, which exploits information about the past performance of different settings. The methodology is applied to set the width of the Gaussian kernel. We carry out an extensive empirical evaluation, including comparisons with other m…

Cited by 145 publications (112 citation statements)
References 14 publications
“…In this work, a set of meta-examples were built from 112 classification datasets. In [10], meta-learning was used to select the width parameter of the Gaussian kernel for regression problems. Although the width parameter of Gaussian kernel is continuous, the problem was treated as a classification task, with the meta-label variable assuming 11 different discrete values.…”
Section: Meta-learning
confidence: 99%
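The citation statement above describes treating the continuous Gaussian kernel width as a classification problem with 11 discrete candidate values. A minimal sketch of that meta-learning setup follows, assuming synthetic placeholder meta-features and meta-labels (the paper's actual 112 datasets and meta-attributes are not reproduced here), with a k-nearest-neighbors meta-learner chosen purely for illustration:

```python
# Hedged sketch: meta-learning the Gaussian kernel width as classification.
# The continuous width is discretized into 11 candidate values; a classifier
# maps dataset meta-features to the index of the best candidate.
# All data below is synthetic and illustrative, not from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n_meta_examples, n_meta_features, n_widths = 50, 5, 11
candidate_widths = np.logspace(-2, 2, n_widths)  # 11 discrete width values

# Each meta-example: meta-features of one dataset + index of its best width.
meta_X = rng.standard_normal((n_meta_examples, n_meta_features))
meta_y = rng.integers(0, n_widths, n_meta_examples)

meta_learner = KNeighborsClassifier(n_neighbors=3).fit(meta_X, meta_y)

# Recommend a width for a new, unseen dataset from its meta-features.
new_dataset_features = rng.standard_normal((1, n_meta_features))
predicted_width = candidate_widths[meta_learner.predict(new_dataset_features)[0]]
print(predicted_width)
```

The design point is that the meta-label is the *index* of a discrete width, so any off-the-shelf classifier can serve as the meta-learner.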
“…In our study we used 16 meta-attributes. The choice of meta-attributes was based on earlier meta-learning studies which work with regression problems [10] [9]:…”
Section: Meta-attributes
confidence: 99%
“…SVM hyper-parameters can be found through grid search (Soares et al 2004). Chapelle et al (2002) propose a gradient descent algorithm.…”
Section: Support Vector Machine (SVM)
confidence: 99%
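The grid-search approach mentioned in the citation statement above can be sketched briefly. This is an illustrative example on toy data, assuming scikit-learn's `SVR`/`GridSearchCV` API; the candidate grid (using scikit-learn's `gamma` parameterization of the RBF kernel width) is an arbitrary choice, not the paper's:

```python
# Hedged sketch: selecting the RBF kernel width for Support Vector
# Regression by grid search with cross-validation. Toy data and the
# gamma grid are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Discrete grid of candidate kernel widths (gamma in scikit-learn's RBF).
param_grid = {"gamma": np.logspace(-3, 2, 11)}
search = GridSearchCV(
    SVR(kernel="rbf", C=1.0),
    param_grid,
    cv=5,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_["gamma"])
```

Grid search evaluates every candidate by cross-validation, which is simple but expensive; the gradient-descent approach of Chapelle et al. (2002) and the meta-learning approach of this paper both aim to avoid that exhaustive search.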
“…The first contribution to this issue (Soares, Brazdil, & Kuba, 2004) is a meta-learning approach to parameter setting (Section 2.1). The work focuses on Support Vector Machines and the goal is to set the width σ of the Gaussian kernel.…”
Section: Contributions to the Special Issue
confidence: 99%