2012
DOI: 10.1214/12-ejs699
Calibrated asymmetric surrogate losses

Abstract: Surrogate losses underlie numerous state-of-the-art binary classification algorithms, such as support vector machines and boosting. The impact of a surrogate loss on the statistical performance of an algorithm is well understood in symmetric classification settings, where the misclassification costs are equal and the loss is a margin loss. In particular, classification-calibrated losses are known to imply desirable properties such as consistency. While numerous efforts have been made to extend surrogate loss-b…

Cited by 67 publications (76 citation statements)
References 22 publications
“…For the problem that the 0-1 loss function is neither convex nor smooth, abundant convex surrogate loss functions (most of them smooth) with the classification-calibrated property [11], [12] have been proposed. These surrogate loss functions, such as the square loss, logistic loss, and hinge loss, have proven useful in many real-world applications.…”
Section: The Traditional Classification Problem
confidence: 99%
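The margin-loss forms usually meant by the square, logistic, and hinge losses named above can be sketched as follows (a standard illustration in Python, not code from the cited works); each is convex in the margin m = y·f(x), unlike the 0-1 loss:

```python
import math

def zero_one(m):
    # 0-1 loss as a function of the margin m = y * f(x)
    return 1.0 if m <= 0 else 0.0

def square(m):
    # square loss in margin form: (1 - m)^2
    return (1.0 - m) ** 2

def logistic(m):
    # logistic loss: log(1 + exp(-m))
    return math.log1p(math.exp(-m))

def hinge(m):
    # hinge loss: max(0, 1 - m)
    return max(0.0, 1.0 - m)

# The square and hinge losses upper-bound the 0-1 loss at every margin,
# which is one reason minimizing them controls classification error.
for m in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert square(m) >= zero_one(m)
    assert hinge(m) >= zero_one(m)
```

(The logistic loss upper-bounds the 0-1 loss only after rescaling by 1/log 2, so no such assertion is made for it here.)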
“…The method is notable because it can be applied to all convex, classification-calibrated surrogate loss functions (if a surrogate loss function is classification-calibrated and the sample size is sufficiently large, the surrogate loss function yields the same optimal classifier as the 0-1 loss function does; see Theorem 1 in Bartlett et al. [11]). This modification is based on the asymmetric classification-calibration results [12] and cannot be used…”
Section: Introduction
confidence: 99%
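What classification-calibration means pointwise can be sketched numerically (an illustrative Python example, not the cited proof): minimizing the conditional surrogate risk η·φ(f) + (1 − η)·φ(−f) over scores f produces a minimizer whose sign agrees with the Bayes rule sign(2η − 1), shown here for the logistic loss:

```python
import math

def logistic(m):
    # logistic surrogate loss: log(1 + exp(-m))
    return math.log1p(math.exp(-m))

def conditional_risk(eta, f, phi=logistic):
    # conditional surrogate risk at a point with P(Y = 1 | x) = eta
    return eta * phi(f) + (1.0 - eta) * phi(-f)

def surrogate_minimizer(eta):
    # brute-force minimizer of the conditional risk over a score grid
    grid = [i / 100.0 for i in range(-500, 501)]
    return min(grid, key=lambda f: conditional_risk(eta, f))

# Calibration: the sign of the surrogate minimizer matches the Bayes rule.
for eta in [0.1, 0.3, 0.7, 0.9]:
    f_star = surrogate_minimizer(eta)
    assert (f_star > 0) == (eta > 0.5)
```

For the logistic loss the exact minimizer is f* = log(η / (1 − η)), so the grid search lands near log(0.7/0.3) ≈ 0.85 when η = 0.7.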
“…In Fig. 3(c), the positives lie farther from the ideal boundary than the negatives, so we believe that placing balanced margins may increase the bias of the model. Given this issue and the viewpoint of the asymmetric surrogate loss [15], we suggest that the margin for positives should be larger in imbalanced classification, so as to help improve the classification of positives. In all, we combine the ideas of DEC [19] and Margin Calibration [20] and introduce the asymmetric stagewise least square (ASLS) loss function in this paper:…”
Section: Asymmetric Stagewise Least Square Loss
confidence: 99%
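The ASLS formula itself is not reproduced in this excerpt; the Python sketch below only illustrates the underlying idea of class-dependent margins with a hypothetical truncated-square loss, where the assumed target margin m_pos for positives exceeds m_neg for negatives:

```python
def asymmetric_square_loss(y, f, m_pos=2.0, m_neg=1.0):
    # Hypothetical illustration (NOT the paper's ASLS formula):
    # a truncated square loss with a larger target margin for
    # positives, so positives keep incurring loss until they
    # clear a wider margin than negatives.
    target = m_pos if y == 1 else m_neg
    margin = y * f
    return max(0.0, target - margin) ** 2

# A positive with margin 1.5 is still penalized (its target is 2.0),
# while a negative with the same margin is not (its target is 1.0).
assert asymmetric_square_loss(+1, 1.5) > 0.0
assert asymmetric_square_loss(-1, -1.5) == 0.0
```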
“…Cost-sensitive learning [14] and the asymmetric surrogate loss [15] are mainly used in internal methods. Support vector machines (SVMs) [16] are popular classifiers because of their remarkable generalization performance.…”
Section: Introduction
confidence: 99%
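The cost-sensitive idea mentioned above can be sketched in a few lines of Python (an assumed illustration, not the method of [14] or [16]): reweight each example's surrogate loss by a class-dependent cost, so errors on the positive class count more; the cost values here are arbitrary:

```python
def hinge(m):
    # hinge loss: max(0, 1 - m)
    return max(0.0, 1.0 - m)

def cost_sensitive_risk(labels, scores, c_pos=5.0, c_neg=1.0):
    # Empirical risk with class-dependent costs (illustrative values):
    # losses on positive examples are weighted c_pos / c_neg times
    # more heavily than losses on negative examples.
    total = 0.0
    for y, f in zip(labels, scores):
        cost = c_pos if y == 1 else c_neg
        total += cost * hinge(y * f)
    return total / len(labels)

# A misclassified positive dominates the risk: the positive example
# contributes 5.0 * hinge(-1.0) = 10.0, the correctly classified
# negative contributes 0, giving an average of 5.0.
risk = cost_sensitive_risk([+1, -1], [-1.0, -1.0])
```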