2006
DOI: 10.1007/s10444-004-7206-2

Approximation with polynomial kernels and SVM classifiers

Abstract: Dedicated to Charlie Micchelli on the occasion of his 60th birthday.

This paper presents an error analysis for classification algorithms generated by regularization schemes with polynomial kernels. Explicit convergence rates are provided for support vector machine (SVM) soft margin classifiers. The misclassification error can be estimated by the sum of the sample error and the regularization error. The main difficulty in studying algorithms with polynomial kernels is the regularization error, which involves deeply the d…
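As a rough sketch of the decomposition mentioned in the abstract (the notation below is standard in learning theory and is assumed here, not quoted from the paper): let $f_z$ denote the classifier produced by the regularization scheme from the sample $z$, $f_\lambda$ the noise-free regularized minimizer, and $f_c$ the Bayes classifier. Writing $\mathcal{E}$ for the true misclassification risk and $\mathcal{E}_z$ for its empirical counterpart, the excess risk is typically split as

$$\mathcal{E}(f_z) - \mathcal{E}(f_c) \le \underbrace{\big\{\mathcal{E}(f_z) - \mathcal{E}_z(f_z)\big\} + \big\{\mathcal{E}_z(f_\lambda) - \mathcal{E}(f_\lambda)\big\}}_{\text{sample error}} + \underbrace{\mathcal{E}(f_\lambda) - \mathcal{E}(f_c) + \lambda \|f_\lambda\|_K^2}_{\text{regularization error}},$$

where $\|\cdot\|_K$ is the norm of the reproducing kernel Hilbert space induced by the polynomial kernel. The sample error is controlled by concentration arguments, while the regularization error measures how well the kernel's hypothesis space approximates the Bayes rule.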


Cited by 124 publications (79 citation statements). References 24 publications.
“…Such trees yield faster rates but they are computationally prohibitive. Recent risk bounds for polynomial-kernel support vector machines may offer a computationally tractable alternative to this approach [19]. One way or another, we feel that dyadic decision trees, or possibly new variants thereof, hold promise to address these issues.…”
Section: Discussion (mentioning)
Confidence: 99%
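To make the "computationally tractable alternative" in the statement above concrete, the following is a minimal sketch of a soft margin SVM with a polynomial kernel. It uses scikit-learn and synthetic data purely for illustration; it is not the estimator analyzed in the cited paper, and all parameter values are placeholders.

```python
# Minimal sketch: soft margin SVM with a polynomial kernel.
# scikit-learn and the synthetic data are illustrative only; the
# degree, coef0, and C values below are placeholders, not choices
# taken from the cited analysis.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary classification problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel="poly" implements K(x, x') = (gamma * <x, x'> + coef0) ** degree;
# C is the soft margin regularization parameter (larger C = less regularization).
clf = SVC(kernel="poly", degree=3, coef0=1.0, C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```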
“…Other authors derive rates for existing practical discrimination rules, but these works are not comparable to ours, since different distributional assumptions or loss functions are considered [15], [16], [17], [18], [19].…”
Section: B. Rates of Convergence in Classification (mentioning)
Confidence: 99%
“…The main advantage of this multi-kernel algorithm is to improve the regularization error by using varying hypothesis spaces (see [31, 16, 32]). The error analysis for this multi-kernel setting can be done in the same way if the covering number of $\bigcup_{\sigma \in \Sigma} \{ f \in \mathcal{H}_{K_\sigma} : \|f\|_{K_\sigma} \le 1 \}$ satisfies (1.8).…”
Section: Recall the set W(r) defined by (4.2); Proposition 4.2 immedi… (mentioning)
Confidence: 99%
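For context, the covering number appearing in this statement is a standard capacity measure (the definition below is the usual one from learning theory; the precise condition (1.8) is stated in the citing paper and is not reproduced here). For a set $\mathcal{F}$ of continuous functions and $\eta > 0$,

$$\mathcal{N}(\mathcal{F}, \eta) = \min\Big\{ m \in \mathbb{N} : \exists\, f_1, \dots, f_m \text{ such that } \mathcal{F} \subseteq \bigcup_{i=1}^{m} \big\{ f : \|f - f_i\|_\infty \le \eta \big\} \Big\},$$

and conditions of this type typically require $\log \mathcal{N}(\mathcal{F}, \eta)$ to grow at most polynomially in $1/\eta$, which is what lets the sample error analysis go through for the union of unit balls over the kernel family $\{K_\sigma\}_{\sigma \in \Sigma}$.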
“…(see, e.g., [2], [3], [5], [27]) Let $V$ be the $p$-norm hinge loss function in (6) and $f_{z,\lambda}$ be defined by (16). Then, for any $f \in (\mathcal{H}_{K_m}^{\bar{t}_m})^k$, $\mathcal{E}(f_{z,\lambda}) - \mathcal{E}(f_M)$ can be bounded by…”
Section: The Approximation Error (mentioning)
Confidence: 99%
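For reference, the $p$-norm hinge loss invoked here is a standard object in soft margin SVM analysis (this is the usual definition from the SVM literature; the equation numbers (6) and (16) above belong to the citing paper):

$$V_p\big(y, f(x)\big) = \big(1 - y f(x)\big)_+^p = \begin{cases} \big(1 - y f(x)\big)^p, & \text{if } y f(x) \le 1, \\ 0, & \text{otherwise,} \end{cases} \qquad p \ge 1,$$

with $p = 1$ recovering the classical hinge loss of the 1-norm soft margin SVM.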