Fast Rates for Support Vector Machines (2005)
DOI: 10.1007/11503415_19

Cited by 376 publications (661 citation statements)
References 8 publications
“…It has been recently shown that the coefficients a_i are optimal if and only if they satisfy the set of inclusions (Steinwart 2003; De Vito et al. 2004):…”
Section: Algebraic Characterization for Regularized Kernel Methods
Citation type: mentioning; confidence: 99%
“…It is worth noticing that the standard approach to SVM computation relies on quadratic programming and lacks a closed-form solution. When the loss function is non-smooth, it has been shown, by resorting to sub-differential calculus, that the coefficients can be characterized in terms of inclusions (Steinwart 2003; De Vito et al. 2004). For regression loss functions, including ε-insensitive SVR (Support Vector Regression), it has recently been proven that the inclusions can be converted into a set of algebraic equations by a proper change of variable (Dinuzzo et al. 2007).…”
Citation type: mentioning; confidence: 99%
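For context, the inclusion characterization referenced in these excerpts commonly takes the following form (a sketch reconstructed from the standard regularized-kernel-methods setting, not quoted from the citing papers; the exact constant depends on how the objective is normalized). For the regularized problem

\[
\min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_{\mathcal{H}}^2 ,
\]

the minimizer admits the kernel expansion \( f_\lambda = \sum_{i=1}^{n} a_i K(\cdot, x_i) \), and the coefficients are optimal if and only if

\[
a_i \in -\frac{1}{2\lambda n} \, \partial L\bigl(y_i, f_\lambda(x_i)\bigr), \qquad i = 1, \dots, n,
\]

where \( \partial L \) denotes the subdifferential of the loss in its second argument. For a differentiable loss the inclusion reduces to an equality; for non-smooth losses such as the hinge or ε-insensitive loss it remains a genuine inclusion.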
“…Indeed ν controls the sparsity of the solution [17], and nν is a lower bound on the number of support vectors [15]. Regarding the regularization parameter ν, the following statements hold:…”
Section: Sparsity on Example Weights
Citation type: mentioning; confidence: 99%
“…3) in order to have more sparsity. Sparsity and accuracy are a trade-off [17], controlled through C and ν.…”
Section: Sparse Substructure Boosting for Regression
Citation type: mentioning; confidence: 99%
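The ν properties quoted in the two excerpts above are easy to observe empirically. Below is a minimal sketch, assuming scikit-learn's NuSVR (an implementation of ν-SVR); the dataset and parameter values are illustrative assumptions, not taken from the cited works.

# Sketch: in nu-SVR, nu * n is a lower bound on the number of support
# vectors, and sweeping nu exposes the sparsity/accuracy trade-off.
# Assumes scikit-learn; data and parameters are synthetic/illustrative.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(300)

for nu in (0.05, 0.2, 0.5, 0.8):
    model = NuSVR(nu=nu, C=1.0, kernel="rbf").fit(X, y)
    n_sv = len(model.support_)   # indices of the support vectors
    r2 = model.score(X, y)       # training R^2, a crude accuracy proxy
    # Expect n_sv >= nu * len(X): smaller nu gives a sparser model,
    # typically at some cost in accuracy.
    print(f"nu={nu:.2f}: {n_sv:3d} SVs (lower bound {nu * len(X):.0f}), R^2={r2:.3f}")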
“…At the same time, the work elicited the problem of the growth of the testing time when a bigger training set was used to obtain better recognition performance. In fact, as far as the third issue is concerned, it is well known that both the training and testing time of an SVM crucially depend on the number of samples considered [16]; likewise, the number of Support Vectors (SVs) found, which determines the complexity of the solution, grows proportionally with the number of samples [28]. This makes the approach unsuitable, at least so far, for on-line learning, where a potentially endless flow of data is acquired by the machine.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
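The scaling behaviour described in this excerpt can be sketched with a small timing experiment. This is a minimal illustration, assuming scikit-learn's SVC; the dataset is synthetic and the absolute timings are machine-dependent.

# Sketch: SVM fit/predict time and the support-vector count both grow
# with the training-set size n, which is the obstacle to online learning
# noted in the quotation above. Assumes scikit-learn; data are synthetic.
import time
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
for n in (500, 2000, 8000):
    X = rng.standard_normal((n, 10))
    y = (X[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)

    t0 = time.perf_counter()
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    fit_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    clf.predict(X)
    pred_s = time.perf_counter() - t0

    print(f"n={n:5d}: fit {fit_s:.2f}s, predict {pred_s:.2f}s, "
          f"{clf.n_support_.sum()} SVs")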