2018
DOI: 10.1109/tpami.2017.2772235
Linear Maximum Margin Classifier for Learning from Uncertain Data

Abstract: In this paper, we propose a maximum margin classifier that deals with uncertainty in data input. More specifically, we reformulate the SVM framework such that each training example can be modeled by a multi-dimensional Gaussian distribution described by its mean vector and its covariance matrix, the latter modeling the uncertainty. We address the classification problem and define a cost function that is the expected value of the classical SVM cost when data samples are drawn from the multi-dimensional Gaussian …
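The key quantity in the abstract, the expected classical SVM cost when each input is drawn from a Gaussian, has a closed form in the linear case: the margin z = y(wᵀx + b) is itself Gaussian, so the expected hinge loss reduces to a one-dimensional Gaussian integral. A minimal sketch of that computation (illustrative, not the authors' implementation; the function name is mine):

```python
import math
import numpy as np

def expected_hinge(w, b, mu, Sigma, y):
    """Closed-form E[max(0, 1 - y (w^T x + b))] for x ~ N(mu, Sigma).

    The margin z = y (w^T x + b) is Gaussian with mean m = y (w^T mu + b)
    and variance s^2 = w^T Sigma w, so the expectation reduces to the
    standard formula (1 - m) * Phi(u) + s * phi(u) with u = (1 - m) / s.
    """
    w, mu = np.asarray(w, float), np.asarray(mu, float)
    m = y * (w @ mu + b)
    s = math.sqrt(float(w @ np.asarray(Sigma, float) @ w))
    if s == 0.0:                       # no input uncertainty: plain hinge loss
        return max(0.0, 1.0 - m)
    u = (1.0 - m) / s
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    return (1.0 - m) * Phi + s * phi
```

With a zero covariance matrix this collapses to the classical hinge loss; with nonzero covariance it is never smaller, since the hinge is convex (Jensen's inequality), which is what makes uncertain examples effectively costlier.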

Cited by 36 publications (34 citation statements)
References 40 publications (78 reference statements)
“…The Local Binary Pattern (LBP) and Speeded-Up Robust Feature (SURF) descriptors are extracted from object images. The proposed method is then compared with three other sample-uncertainty modeling methods, including uniform distribution [50], same Gaussian distribution [34], and features without considering sample uncertainty, all with the same SVM classifier. The SVM hyper-parameters are: penalty coefficient C = 0.5; the kernel function is the radial basis function (RBF); the RBF coefficient gamma is 0.001.…”
Section: A. Results of DGS
confidence: 99%
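The hyper-parameter setting quoted above (C = 0.5, RBF kernel, gamma = 0.001) can be reproduced directly with scikit-learn's `SVC`. A hedged sketch on synthetic two-class features standing in for the LBP/SURF descriptors — the data here is illustrative, not the cited experiment:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the LBP/SURF feature vectors (illustrative only).
X, y = make_blobs(n_samples=400, centers=[[0, 0], [10, 10]],
                  cluster_std=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyper-parameters as quoted in the citation: C = 0.5, RBF kernel, gamma = 0.001.
clf = SVC(C=0.5, kernel="rbf", gamma=0.001).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Note that such a small gamma makes the RBF kernel very wide (nearly linear behavior), so on real descriptors this value would normally be tuned rather than fixed.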
“…The Wisconsin Diagnostic Breast Cancer (WDBC) dataset contains 569 examples, which can be divided into two classes: malignant (212 instances) and benign (357 instances). The proposed method is compared with the baseline SVM, power SVM [37], SVM-GSU [34], GEP-SVM [10], and FLST-SVM [9]. Because the WDBC data does not provide a division into training and testing subsets, the dataset is divided into a training subset (90%) and a testing subset (10%).…”
Section: 2) Wisconsin Diagnostic Breast Cancer Classification
confidence: 99%
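The WDBC setup described above (569 examples, 212 malignant / 357 benign, a 90/10 split) matches scikit-learn's bundled copy of the dataset, so the baseline-SVM leg of that comparison can be sketched as follows (SVM-GSU and the other compared methods are not in scikit-learn; this shows only the data split and the plain SVM baseline):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# WDBC: 569 examples; class 0 = malignant (212), class 1 = benign (357).
X, y = load_breast_cancer(return_X_y=True)

# 90% training / 10% testing, stratified to preserve the class ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=0)

# Plain SVM baseline; feature scaling matters because WDBC features
# span very different ranges.
baseline = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
acc = baseline.score(X_te, y_te)
```

Since no canonical split is published, results on a single random 90/10 split vary; repeated splits or cross-validation would give a more stable comparison.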
“…The authors concluded that the interpretability of brain decoding cannot be improved using regularization. The problem is primarily caused by the decoding process per se, which minimizes the classification error considering only the uncertainty in the output space (Zhang, 2005; Aggarwal and Yu, 2009; Tzelepis et al, 2015) and not the uncertainty in the input space (or noise). Our experimental results on the toy data (see Section 3.1) show that if the right criterion is used for selecting the best hyper-parameter values, an appropriate choice of regularization strategy can still play a significant role in improving the interpretability of results.…”
Section: Discussion
confidence: 99%
“…Haufe et al [39] demonstrated that the weights in linear discriminative models are unable to accurately assess the relationship between independent variables, primarily because of the contribution of noise in the decoding process. The problem is primarily caused by the decoding process, which minimizes the classification error considering only the uncertainty in the output space [80, 98, 99] and not the uncertainty in the input space (or noise). The authors concluded that the interpretability of brain decoding cannot be improved using regularization.…”
Section: Discussion
confidence: 99%