2015
DOI: 10.1609/aaai.v29i1.9554
Eigenvalues Ratio for Kernel Selection of Kernel Methods

Abstract: The selection of the kernel function, which determines the mapping between the input space and the feature space, is of crucial importance to kernel methods. Existing kernel selection approaches commonly use measures of generalization error, which are usually difficult to estimate and have slow convergence rates. In this paper, we propose a novel measure, called the eigenvalues ratio (ER), of the tight bound of generalization error for kernel selection. ER is the ratio between the sum of the main eigenvalues and t…

Cited by 10 publications (2 citation statements); References 26 publications.
“…Moreover, for different kinds of kernel functions (or the same kind but with different parameters), the discrepancies between the eigenvalues of different kernels may be very large; hence the absolute value of the tail eigenvalues of a kernel function cannot precisely reflect the goodness of different kernels. Liu and Liao (2015) first considered the relative value of eigenvalues for kernel methods. In this paper, we consider another measure of the relative value of eigenvalues, that is, the proportion of the sum of the first t largest principal eigenvalues to the sum of all eigenvalues, for kernel learning.…”
Section: Measures of Generalization Error
confidence: 99%
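The relative-eigenvalue measure quoted above — the proportion of the sum of the first t largest eigenvalues of a kernel matrix to the sum of all its eigenvalues — can be sketched in a few lines. This is a minimal illustration of that proportion, not the authors' reference implementation; the RBF kernel, bandwidth, and sample data below are assumptions chosen only for the example.

```python
import numpy as np

def eigenvalue_proportion(K, t):
    """Proportion of the sum of the t largest eigenvalues of a
    symmetric PSD kernel (Gram) matrix K to the sum of all its
    eigenvalues. Illustrative sketch of the relative-eigenvalue
    measure described in the citation statement above."""
    eigvals = np.linalg.eigvalsh(K)   # eigenvalues in ascending order
    eigvals = eigvals[::-1]           # reorder to descending
    return eigvals[:t].sum() / eigvals.sum()

# Toy example: RBF kernel matrix on a small 1-D sample
# (kernel choice and bandwidth are arbitrary for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1))
sq_dists = (X - X.T) ** 2
K = np.exp(-sq_dists / 2.0)           # RBF kernel, bandwidth 1

ratio = eigenvalue_proportion(K, t=3)
print(ratio)                          # a value in (0, 1]
```

A smooth kernel concentrates most of its spectral mass in the leading eigenvalues, so the proportion approaches 1 quickly as t grows; comparing this quantity across candidate kernels is the kind of relative comparison the statement contrasts with absolute tail-eigenvalue magnitudes.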
“…The upper bounds are composed of the error on data and the complexity of the hypothesis space (Bartlett, Boucheron, and Lugosi 2002). Different measures of the complexity constitute different kernel selection criteria, such as Rademacher complexity (Bartlett and Mendelson 2002), local Rademacher complexity (Cortes, Kloft, and Mohri 2013), the radius-margin bound (Chapelle et al. 2002), maximum mean discrepancy (MMD) (Sriperumbudur et al. 2009; Gretton et al. 2012a; 2012b; Song et al. 2012), effective dimensionality (Zhang 2005; Bach 2013), eigenvalues ratio (Liu and Liao 2015), and the covering number (Ding and Liao 2014b). The second category is to maximize the similarity between the kernel matrix and the label matrix.…”
Section: Introduction
confidence: 99%