2012 · DOI: 10.1016/j.neucom.2011.07.005
Combining meta-learning and search techniques to select parameters for support vector machines

Cited by 110 publications (77 citation statements) · References 18 publications
“…According to some studies, the hyper-parameters of SVMs should always be tuned when looking for the best predictive performance [14], [25], [27]. Conversely, other studies reported experimental results where the default values provided predictive performances similar to those obtained by optimized hyper-parameters [3].…”
Section: Results
confidence: 99%
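The tuning the statement refers to is typically a search over SVM hyper-parameters such as C and gamma, with the tuned setting compared against the library defaults. A minimal sketch of that comparison, where `cv_score` is a hypothetical stand-in for a real cross-validated SVM evaluation:

```python
import itertools

def cv_score(C, gamma):
    # Hypothetical stand-in for a cross-validated SVM score; in practice
    # this would train an SVM with (C, gamma) and return validation accuracy.
    return 1.0 - abs(C - 10.0) / 100.0 - abs(gamma - 0.1)

# Typical library defaults for an RBF-kernel SVM (illustrative values).
DEFAULTS = {"C": 1.0, "gamma": 0.1}

def grid_search(C_grid, gamma_grid):
    # Exhaustively evaluate every (C, gamma) pair and keep the best one.
    best_C, best_gamma = max(itertools.product(C_grid, gamma_grid),
                             key=lambda cg: cv_score(*cg))
    return {"C": best_C, "gamma": best_gamma}
```

Whether the tuned pair actually beats `DEFAULTS` is exactly the question the cited studies disagree on: the search can only match or improve the validation score it optimizes, but on some datasets the margin over the defaults is negligible.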
“…Other studies combined MTL with optimization techniques for the selection of hyper-parameter values [25]- [28]. In these studies, MTL recommends hyper-parameter values for the initial population of a search technique, leading optimization methods to a faster convergence.…”
Section: Related Work
confidence: 99%
“…In this paper, we introduced the kernel SVMs (KSVMs), which extend original linear SVMs to nonlinear SVM classifiers by applying the kernel function to replace the dot product form in the original SVMs [20]. The KSVMs allow us to fit the maximum-margin hyperplane in a transformed feature space.…”
Section: Introduction
confidence: 99%
“…The most commonly used variants are the maximum margin L 1 norm SVM [1], and the least squares SVM (LSSVM) [2], both of which require the solution of a quadratic programming problem. In the last few years, SVMs have been applied to a number of applications to obtain cutting edge performance; novel uses have also been devised, where their utility has been amply demonstrated [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. SVMs were motivated by the celebrated work of Vapnik and his colleagues on generalization, and the complexity of learning.…”
Section: Introduction
confidence: 99%