2022
DOI: 10.1016/j.neucom.2022.03.034
Ideal kernel tuning: Fast and scalable selection of the radial basis kernel spread for support vector classification

Cited by 3 publications (8 citation statements)
References 15 publications
“…The ideal kernel J for a classification problem [22] is a function defined as J(x, y) = 1 when x and y share the class label, and J(x, y) = 0 otherwise. Let {x_n}_{n=1}^N be the set of training patterns, and let c_n be the class label of x_n, with c_n ∈ {1, .…”
Section: Methods
confidence: 99%
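The ideal kernel defined in the statement above can be sketched directly. This is a minimal illustration, not the paper's implementation; the function name `ideal_kernel` is chosen here for clarity.

```python
import numpy as np

def ideal_kernel(c):
    """Ideal kernel matrix: J[n, m] = 1 if patterns n and m share
    the same class label, and 0 otherwise."""
    c = np.asarray(c)
    # Pairwise label equality via broadcasting.
    return (c[:, None] == c[None, :]).astype(float)

# Example: four patterns, two classes.
labels = np.array([0, 0, 1, 1])
J = ideal_kernel(labels)  # 4x4 symmetric 0/1 matrix with unit diagonal
```

By construction J is symmetric with J(x, x) = 1, matching the properties used for the kernel comparison in the second statement.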
“…Here, K_{nm}(σ) = K(x_n, x_m, σ), and the sum runs over m > n because K_{nn} = 1 and K_{nm} = K_{mn}, with M = N(N−1)/2 the number of terms in the sum. Our strategy in previous works [22], [23] was to calculate D(σ) for several σ values and to select the σ minimizing D(σ). In the following, we develop a method to estimate σ directly from the training set {x_n, c_n}_{n=1}^N.…”
Section: Methods
confidence: 99%
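The grid-search strategy described above (compute D(σ) for several σ values and keep the minimizer) can be sketched as follows. The exact form of D(σ) is not given in this excerpt; the sketch below assumes it is the mean squared difference between the RBF kernel K(σ) and the ideal kernel J over the M = N(N−1)/2 pairs with m > n. The function names `rbf_gram`, `D`, and `select_sigma` are illustrative, not the authors' code.

```python
import numpy as np

def rbf_gram(X, sigma):
    """RBF Gram matrix K[n, m] = exp(-||x_n - x_m||^2 / (2 sigma^2))."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def D(X, c, sigma):
    """Assumed distance between K(sigma) and the ideal kernel J,
    averaged over the M = N(N-1)/2 pairs with m > n."""
    c = np.asarray(c)
    J = (c[:, None] == c[None, :]).astype(float)
    K = rbf_gram(X, sigma)
    # Upper triangle (m > n) only: K_nn = 1 and K is symmetric.
    iu = np.triu_indices(len(c), k=1)
    return np.mean((K[iu] - J[iu]) ** 2)

def select_sigma(X, c, grid):
    """Grid search: evaluate D(sigma) on a grid and keep the minimizer."""
    return min(grid, key=lambda s: D(X, c, s))
```

The paper's contribution, per the statement above, is to replace this grid search with a direct estimate of σ from the training set, avoiding the repeated O(N^2) evaluations of D(σ).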