Proceedings of the 2007 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972771.4

Maximum Margin Classifiers with Specified False Positive and False Negative Error Rates

Abstract: This paper addresses the problem of maximum margin classification given the moments of the class conditional densities and specified false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second-order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. …
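To make the formulation concrete, below is a minimal sketch of how the chance-constrained maximum margin problem described in the abstract can be written as a second-order cone program. It assumes the standard multivariate Chebyshev bound, under which Pr(w·x ≥ b) ≥ η holds for every distribution with mean μ and covariance Σ = SSᵀ whenever w·μ − b ≥ κ‖Sᵀw‖ with κ = √(η/(1−η)). The variable names, the toy data, and the use of the cvxpy solver are illustrative assumptions, not the paper's own implementation (which solves the dual via an ellipsoid-distance iteration).

```python
import cvxpy as cp
import numpy as np

# Toy two-class data; in practice the first two moments would be
# estimated from training samples (names and values are assumed).
rng = np.random.default_rng(0)
X1 = rng.normal([2.0, 2.0], 1.0, size=(200, 2))    # positive class
X2 = rng.normal([-2.0, -2.0], 1.0, size=(200, 2))  # negative class

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S1 = np.linalg.cholesky(np.cov(X1.T))  # Sigma1 = S1 @ S1.T
S2 = np.linalg.cholesky(np.cov(X2.T))

# Specified per-class accuracy bounds: eta = 1 - (allowed error rate),
# so eta1 controls false negatives and eta2 controls false positives.
eta1, eta2 = 0.8, 0.8
k1 = np.sqrt(eta1 / (1.0 - eta1))  # Chebyshev multipliers
k2 = np.sqrt(eta2 / (1.0 - eta2))

w = cp.Variable(2)
b = cp.Variable()

# Chance constraints Pr(w.x >= b) >= eta1 and Pr(w.x <= b) >= eta2,
# tightened via Chebyshev into second-order cone constraints; the
# ">= 1 + ..." normalization fixes the margin scale.
constraints = [
    mu1 @ w - b >= 1 + k1 * cp.norm(S1.T @ w, 2),
    b - mu2 @ w >= 1 + k2 * cp.norm(S2.T @ w, 2),
]
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)
prob.solve()
print(prob.status, w.value, b.value)
```

Minimizing ‖w‖ under these cone constraints maximizes the margin between the two Chebyshev ellipsoids; the dual mentioned in the abstract correspondingly measures the distance between those ellipsoids.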

Cited by 33 publications (18 citation statements)
References 16 publications
“…2) Biased classification: we can vary the preferential bias for each class η1 and η2 instead of varying the misclassification costs and try to find a maximum-margin hyperplane in the biased classification framework (B-SOCP) [17].…”
Section: Unbalanced Data
mentioning confidence: 99%
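As a rough illustration of the biased-classification idea in the statement above, the sketch below sweeps the preferential bias of one class while holding the other fixed and re-solves the cone program each time. The function name biased_socp, the toy moments, and the cvxpy-based formulation are assumptions for illustration, following the same Chebyshev-tightened constraints as the sketch after the abstract rather than the exact B-SOCP of [17].

```python
import cvxpy as cp
import numpy as np

def biased_socp(mu1, S1, mu2, S2, eta1, eta2):
    """Solve one biased max-margin SOCP (hypothetical helper);
    returns (w, b) or None if the bias levels are infeasible."""
    k1 = np.sqrt(eta1 / (1.0 - eta1))
    k2 = np.sqrt(eta2 / (1.0 - eta2))
    w = cp.Variable(len(mu1))
    b = cp.Variable()
    prob = cp.Problem(
        cp.Minimize(cp.norm(w, 2)),
        [mu1 @ w - b >= 1 + k1 * cp.norm(S1.T @ w, 2),
         b - mu2 @ w >= 1 + k2 * cp.norm(S2.T @ w, 2)],
    )
    prob.solve()
    if prob.status != cp.OPTIMAL:
        return None
    return w.value, b.value

# Vary the bias eta2 of the negative class with eta1 held fixed.
mu1, mu2 = np.array([2.0, 2.0]), np.array([-2.0, -2.0])
S1 = S2 = np.eye(2)  # toy moments; normally estimated from data
for eta2 in (0.5, 0.6, 0.7, 0.8):
    sol = biased_socp(mu1, S1, mu2, S2, 0.8, eta2)
    print(eta2, "infeasible" if sol is None else sol[1])
```

Raising η2 demands a lower false positive rate and pushes the hyperplane away from the negative class; sweeping it traces out the trade-off between the two error types without touching misclassification costs.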
“…They are used in a variety of settings such as feature selection [3], dealing with missing features [23], classification and ordinal regression algorithms that scale to large datasets [18], and formulations to deal with unbalanced data [17,10]. In this work, we give a scalable cost-sensitive formulation based on chance-constraints which satisfies all the requirements needed for learning a good link predictor mentioned above, and show how it can be used for link prediction to significantly improve performance.…”
Section: Introduction
mentioning confidence: 99%
“…Thus the deployment of the parallelization is extremely simple and requires few memory copy operations among computational nodes. Support vector machines [5] and the related max-margin models [23], [19] are robust and accurate classification algorithms, widely used in many areas. Many successful efforts have been devoted to solving SVM problems [13], [21], [14].…”
Section: Introduction
mentioning confidence: 99%
“…The authors in [4] study Bayesian risk analysis and replace the quadratic loss function with an asymmetric loss function to derive a general class of functions which approach infinity near the origin to limit underestimates. In [5], the authors present a maximum margin classifier which bounds misclassification for each class differently, thus allowing for different tolerance levels. In [6], the authors use a smoothing strategy to modify the typical SVR approach into an unconstrained problem, thereby solving only a system of linear equations rather than a convex quadratic program.…”
Section: Introduction
mentioning confidence: 99%