2005
DOI: 10.1109/tit.2004.839514
Consistency of Support Vector Machines and Other Regularized Kernel Classifiers

Cited by 164 publications (164 citation statements) · References 37 publications
“…3, the regularization parameter λ = (m ∧ n)^(-0.9) guarantees the statistical consistency of KuLSIF and KL-div under mild assumptions. Statistical properties of KLR have been studied by Bartlett et al (2006), Bartlett and Tewari (2007), Steinwart (2005), and Park (2009). In particular, Steinwart (2005) proved that, under mild assumptions, KLR with λ = (m+n)^(-0.9) is statistically consistent.…”
Section: Statistical Performance
confidence: 99%
“…Statistical properties of KLR have been studied by Bartlett et al (2006), Bartlett and Tewari (2007), Steinwart (2005), and Park (2009). In particular, Steinwart (2005) proved that, under mild assumptions, KLR with λ = (m+n)^(-0.9) is statistically consistent. When the training samples are balanced, i.e., when the ratio of sample sizes m/n converges to a positive constant, the regularization parameter λ = (m ∧ n)^(-0.9) guarantees the statistical consistency of KLR.…”
Section: Statistical Performance
confidence: 99%
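The λ = (m ∧ n)^(-0.9) schedule in the excerpts above can be sketched numerically. This is a minimal illustration, not the cited authors' code: the toy data, the Gaussian-kernel feature approximation, and the mapping to scikit-learn's C parameter (which scales the loss term rather than the penalty) are all assumptions of this sketch.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression

# Toy two-class data of sizes m and n (hypothetical, for illustration only).
rng = np.random.default_rng(0)
m, n = 300, 200
X = np.vstack([rng.normal(-1.0, 1.0, (m, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.hstack([np.zeros(m), np.ones(n)])

# Regularization schedule from the cited result: lambda = min(m, n)^(-0.9).
lam = min(m, n) ** -0.9

# scikit-learn minimizes 0.5*||w||^2 + C * sum(loss); the consistency results
# use mean(loss) + lambda*||f||^2, suggesting C = 1 / (2 * lambda * (m + n)).
# This mapping (up to constants) is an assumption of the sketch.
C = 1.0 / (2.0 * lam * (m + n))

# Approximate Gaussian-kernel features as a stand-in for the RKHS of a
# universal kernel, giving a kernel-logistic-regression-style classifier.
features = Nystroem(kernel="rbf", gamma=0.5, n_components=100, random_state=0)
klr = LogisticRegression(C=C, max_iter=1000)
klr.fit(features.fit_transform(X), y)
acc = klr.score(features.transform(X), y)
```

As m and n grow, shrinking λ at this rate lets the estimator trade regularization bias for variance slowly enough that the excess risk vanishes, which is the content of the consistency claim.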
“…This handles all of the cases shown in Figure 1 except the support vector machine. Steinwart (2002) has demonstrated consistency for the support vector machine as well, in a general setting where F is taken to be a reproducing kernel Hilbert space and φ is assumed continuous. Other results on Bayes-risk consistency have been presented by Breiman (2000), Jiang (2003), Mannor and Meir (2001), and Mannor et al (2002).…”
Section: Introduction
confidence: 98%
“…They showed that if the start and end vertex kernels have the so-called universality property (e.g. the Gaussian kernel) [32], then the resulting Kronecker edge kernel is also universal, which in turn yields universal consistency when kernel-based learning algorithms, such as support vector machines and ridge regression, are trained with this kernel.…”
Section: Related Work
confidence: 99%
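The Kronecker edge-kernel construction mentioned in this excerpt can be sketched as follows. The vertex feature dimensions and RBF bandwidth are hypothetical; the point is the general property that the Kronecker product of two valid (here Gaussian) vertex Gram matrices is itself a symmetric positive semidefinite Gram matrix over all (start, end) pairs.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """Gaussian (RBF) Gram matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
starts = rng.normal(size=(4, 3))  # start-vertex feature vectors
ends = rng.normal(size=(5, 3))    # end-vertex feature vectors

K_start = rbf_gram(starts, starts)  # 4 x 4 vertex Gram matrix
K_end = rbf_gram(ends, ends)        # 5 x 5 vertex Gram matrix

# Kronecker edge kernel over all 20 (start, end) pairs:
# K_edge[(i, j), (k, l)] = K_start[i, k] * K_end[j, l]
K_edge = np.kron(K_start, K_end)    # 20 x 20

# Smallest eigenvalue should be >= 0 (up to round-off), since a Kronecker
# product of positive semidefinite matrices is positive semidefinite.
min_eig = np.linalg.eigvalsh(K_edge).min()
```

Any kernel machine that accepts a precomputed Gram matrix can then be trained on K_edge directly; universality of the factors is what upgrades this from "valid kernel" to "consistency-supporting kernel" in the cited result.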