Computational Optimization 1999
DOI: 10.1007/978-1-4615-5197-3_5

Multicategory Classification by Support Vector Machines

Cited by 143 publications (145 citation statements); references 13 publications. Citing publications span 2002–2018.

“…The idea of keeping the α's constant to compute DJ can be extended to the multi-class problem (Bredensteiner, 1999), to other kernel methods such as KPCA (Schölkopf, 1998), and to non-classification problems such as regression, density estimation (see e.g. Vapnik, 1998) and clustering (Ben-Hur, 2000).…”
Section: Generalization of SVM-RFE to the Non-linear Case and Other Kernel Methods (mentioning)
confidence: 99%
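
The criterion this snippet refers to ranks features by how much the SVM cost function J = (1/2) α᠎ᵀHα would change if a feature were removed, with the trained α's held fixed rather than retraining. Below is a minimal sketch under stated assumptions: an RBF kernel is chosen for illustration, `rfe_ranking_criterion` and all variable names are hypothetical, and `alpha`/`y` are assumed to come from an already-trained binary SVM (the multi-class extension the snippet mentions would sum the same quantity over the k machines).

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def rfe_ranking_criterion(X, y, alpha, gamma=1.0):
    """DJ(i) = (1/2) a^T H a - (1/2) a^T H(-i) a for each feature i,
    with the alphas kept constant (no retraining).
    H[j, k] = y_j * y_k * K(x_j, x_k); H(-i) uses the kernel matrix
    recomputed with feature i removed."""
    a = alpha * y                                   # signed multipliers
    J = 0.5 * a @ rbf_kernel(X, gamma) @ a          # current cost term
    scores = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        X_minus = np.delete(X, i, axis=1)           # drop feature i
        scores[i] = J - 0.5 * a @ rbf_kernel(X_minus, gamma) @ a
    return scores  # smallest DJ(i) = least useful feature, removed first
```

Keeping the α's fixed is what makes the criterion cheap: each candidate removal costs one kernel-matrix evaluation instead of a full SVM retraining.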
“…In contrast, other one-from-the-rest and SVM k-class classifiers (Bottou et al., 1994; Bennett & Mangasarian, 1993; Bredensteiner & Bennett, 1999) require the solution of either a single large or k smaller quadratic or linear programs that need specialized optimization codes such as CPLEX (1992). On the other hand, obtaining a linear or nonlinear PSVM classifier as we propose here requires nothing more sophisticated than solving k systems of linear equations.…”
Section: Introduction (mentioning)
confidence: 99%
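
The contrast drawn here is between solving quadratic or linear programs and solving linear systems. The sketch below illustrates the linear-system side, assuming the standard linear proximal SVM formulation in which each one-from-the-rest separator [w; γ] solves (I/ν + EᵀE) u = Eᵀd with E = [A, −e]; function names are illustrative, not the cited authors' API.

```python
import numpy as np

def psvm_one_vs_rest(X, y, classes, nu=1.0):
    """One-from-the-rest training via one linear system per class
    (proximal-SVM-style sketch, not a quadratic program)."""
    E = np.hstack([X, -np.ones((X.shape[0], 1))])   # augmented data [A, -e]
    M = np.eye(E.shape[1]) / nu + E.T @ E           # (I/nu + E^T E), shared by all classes
    models = {}
    for c in classes:
        d = np.where(y == c, 1.0, -1.0)             # +1 for class c, -1 for the rest
        models[c] = np.linalg.solve(M, E.T @ d)     # u = [w; gamma], one solve per class
    return models

def psvm_predict(models, X):
    E = np.hstack([X, -np.ones((X.shape[0], 1))])
    labels = list(models)
    S = np.column_stack([E @ models[c] for c in labels])
    return np.array(labels)[np.argmax(S, axis=1)]   # class with the largest score wins
```

Note that the k solves share the same matrix M, so a single factorization can serve all k classes, which is exactly why this route avoids specialized optimization codes.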
“…As was pointed out in Section 2.2, it was independently introduced by other researchers under various forms (see for instance Vapnik, 1998; Bredensteiner and Bennett, 1999). If we reformulate its learning problem as an instance of Problem 2, then the corresponding loss function ℓWW is given by:…”
Section: Characterization of the Four Main M-SVMs (mentioning)
confidence: 99%
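
The formula itself is truncated in the snippet. For orientation, the multi-class hinge loss conventionally denoted ℓWW (after Weston and Watkins, the model this M-SVM literature attributes independently to Vapnik and to Bredensteiner and Bennett) penalizes every competing class whose score comes within the unit margin of the true class; in standard notation, not necessarily that of the cited paper, it reads:

```latex
\ell_{WW}\bigl(y, f(x)\bigr) \;=\; \sum_{k \neq y} \max\Bigl(0,\; 1 - \bigl(f_y(x) - f_k(x)\bigr)\Bigr)
```

The sum over k ≠ y is what distinguishes this joint formulation from training k independent one-from-the-rest binary hinge losses.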