2011
DOI: 10.1287/opre.1100.0854

Support Vector Machines with the Ramp Loss and the Hard Margin Loss

Abstract: In the interest of deriving classifiers that are robust to outlier observations, we present integer programming formulations of Vapnik's support vector machine (SVM) with the ramp loss and hard margin loss. The ramp loss allows a maximum error of 2 for each training observation, while the hard margin loss calculates error by counting the number of training observations that are misclassified outside of the margin. SVM with these loss functions is shown to be a consistent estimator when used with certain kernel…
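For orientation, both losses can be written as functions of the functional margin z = y f(x). The sketch below uses assumed notation: the cap of 2 follows the abstract, and the indicator form of the hard margin loss transcribes the abstract's wording ("misclassified outside of the margin"), so the paper's exact boundary convention may differ.

\[
  R_{\text{ramp}}(z) \;=\; \min\bigl\{2,\ \max(0,\, 1 - z)\bigr\},
  \qquad
  L_{\text{hard}}(z) \;=\;
  \begin{cases}
    1, & z < -1,\\
    0, & z \ge -1,
  \end{cases}
  \qquad z = y\, f(x).
\]

Here R_ramp agrees with the hinge loss max(0, 1 − z) for z ≥ −1 and saturates at 2 exactly where L_hard starts counting, which is what makes both losses insensitive to how far an outlier lies beyond the margin.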

Cited by 124 publications (148 citation statements).
References 26 publications.
“…Coordinate descent algorithm for ramp-LPSVM. ⋄ Express and solve the corresponding subproblem (6). Record the optimum z⋆_{k,j};…”
Section: Coordinate Descent Algorithm
confidence: 99%
“…Recall the subproblem (6). Unlike a general nonlinear optimization problem, it is piecewise linear in a single variable, which makes it convenient to solve.…”
Section: Solving Subproblems
confidence: 99%
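The remark above is the computational crux: a one-variable piecewise-linear objective attains its minimum at a breakpoint or at an endpoint of the feasible interval, so the subproblem reduces to evaluating finitely many candidates. The Python sketch below illustrates only this principle; the function, breakpoints, and helper name are hypothetical and are not subproblem (6) from the citing paper.

def minimize_piecewise_linear(f, breakpoints, lo, hi):
    """Return (argmin, min value) of a piecewise-linear f on [lo, hi].

    Between consecutive breakpoints f is linear, so its minimum over
    [lo, hi] is attained at an interval endpoint or a breakpoint;
    checking those finitely many candidates is enough.
    """
    candidates = [lo, hi] + [b for b in breakpoints if lo <= b <= hi]
    best = min(candidates, key=f)
    return best, f(best)

# Hypothetical example: f(z) = |z - 1| + 0.5 * max(0, z + 2),
# a piecewise-linear function with breakpoints at z = 1 and z = -2.
f = lambda z: abs(z - 1.0) + 0.5 * max(0.0, z + 2.0)
z_star, val = minimize_piecewise_linear(f, breakpoints=[1.0, -2.0], lo=-5.0, hi=5.0)
print(z_star, val)  # 1.0 1.5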
“…Thus, a classifier more robust against outliers is obtained [241]. In [38], a nonlinear mixed-integer formulation of the SVM with ramp loss is proposed, in which a binary variable is introduced for each training object, equal to 1 if the object is misclassified outside the margin and 0 otherwise. The ramp loss has also been advocated in [213,66], motivated by producing fewer support vectors than the SVM with hinge loss.…”
Section: Departing from Convexity in SVM
confidence: 99%
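For concreteness, a big-M sketch of a ramp-loss formulation of this kind, in assumed notation for the linear-kernel case (C is the usual trade-off parameter, M a sufficiently large constant; the paper's exact model may differ):

\[
\begin{aligned}
\min_{w,\, b,\, \xi,\, z} \quad & \tfrac{1}{2}\lVert w \rVert^{2}
  + C \sum_{i=1}^{n} \bigl(\xi_i + 2 z_i\bigr) \\
\text{s.t.} \quad & y_i \bigl(w^{\top} x_i + b\bigr) \;\ge\; 1 - \xi_i - M z_i,
  \qquad i = 1, \dots, n,\\
& 0 \le \xi_i \le 2, \quad z_i \in \{0, 1\},
  \qquad i = 1, \dots, n.
\end{aligned}
\]

Setting z_i = 1 deactivates the margin constraint for observation i and charges the capped loss of 2, so at optimality the binary variables mark exactly the observations misclassified outside the margin, and the model minimizes the ramp loss plus the usual regularizer.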