2015
DOI: 10.48550/arxiv.1505.07634
Preprint
Learning with Symmetric Label Noise: The Importance of Being Unhinged

Abstract: Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2010] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2010] result b…
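For context, the SLN-robust loss the abstract refers to is the "unhinged" loss of the paper's title; a minimal sketch, assuming the linear form ℓ(y, v) = 1 − y·v (my reading, since the abstract is truncated here):

```python
# Minimal sketch (not the authors' code) of the "unhinged" loss,
# ell(y, v) = 1 - y*v, a convex, classification-calibrated loss.
import numpy as np

def unhinged_loss(y, v):
    """Unhinged loss for labels y in {-1, +1} and real-valued scores v."""
    return 1.0 - y * v

# Under symmetric label noise with flip rate rho, the expected noisy loss is
#   (1 - rho) * (1 - y*v) + rho * (1 + y*v) = 1 - (1 - 2*rho) * y * v,
# an affine transform of the clean loss, so its minimiser is unchanged.
rho = 0.2
y, v = 1.0, 0.7
noisy_expected = (1 - rho) * unhinged_loss(y, v) + rho * unhinged_loss(-y, v)
assert np.isclose(noisy_expected, 1 - (1 - 2 * rho) * y * v)
```

The affine relationship above is what makes a linear loss immune to the Long and Servedio [2010] construction, whereas bounded-below convex potentials are not.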

Cited by 7 publications (7 citation statements)
References 9 publications (15 reference statements)
“…Following [14,16], we shuffle the labels of the training set by a noise transition matrix Q, where Q_{ij} = Pr[ỹ = j | y = i] denotes the probability of flipping class i to j. The widely used structures of Q, i.e., symmetry flipping [15,23] and pair flipping [3], are adopted in our study. Note that, consistent with [3,15,23], we validate our NIB module with different noise ratios, denoted 'symmetry-10%', 'symmetry-20%', 'symmetry-40%' and 'pair-10%'.…”
Section: Experimental Settings (mentioning)
confidence: 99%
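To make the two flipping schemes in the excerpt above concrete, here is a small sketch; it is not code from the cited works, and the normalisation of the symmetric matrix (dividing the noise mass over C−1 off-diagonal entries) is one common convention among several:

```python
# Hypothetical construction of the noise transition matrix Q,
# Q[i, j] = Pr[noisy label = j | clean label = i].
import numpy as np

def symmetric_Q(num_classes, noise_rate):
    """Symmetry flipping: noise mass spread uniformly over the other classes."""
    Q = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    np.fill_diagonal(Q, 1.0 - noise_rate)
    return Q

def pair_Q(num_classes, noise_rate):
    """Pair flipping: class i is flipped only to its neighbour (i + 1) mod C."""
    Q = np.eye(num_classes) * (1.0 - noise_rate)
    for i in range(num_classes):
        Q[i, (i + 1) % num_classes] = noise_rate
    return Q

def corrupt_labels(labels, Q, rng=np.random.default_rng(0)):
    """Resample each label from the row of Q indexed by its clean class."""
    return np.array([rng.choice(len(Q), p=Q[y]) for y in labels])

# e.g. 'symmetry-40%' on a 10-class problem:
noisy = corrupt_labels(np.array([0, 3, 7]), symmetric_Q(10, 0.4))
```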
“…The widely used structures of Q, i.e., symmetry flipping [15,23] and pair flipping [3], are adopted in our study. Note that, consistent with [3,15,23], we validate our NIB module with different noise ratios, denoted 'symmetry-10%', 'symmetry-20%', 'symmetry-40%' and 'pair-10%'. For example, 'symmetry-10%' means that 10% of the labels have been symmetrically flipped to noisy labels.…”
Section: Experimental Settings (mentioning)
confidence: 99%
“…In this way, adapting the meta-model on the support set yields inaccurate task-specific parameters and negatively impacts the meta-training process. Specifically, for miniImagenet, we apply symmetry flipping to the labels of the support set [27]. The default ratio of noisy tasks is set to 0.6.…”
Section: Meta-learning With Noise (mentioning)
confidence: 99%
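The excerpt above describes corrupting only the support sets of a fraction of meta-training tasks; the following is an illustrative sketch under that reading (the function and parameter names, such as make_noisy_tasks and noisy_task_ratio, are assumptions, not from [27]):

```python
# Illustrative: inject symmetric label noise into the support sets of a
# randomly chosen fraction of meta-training tasks; query labels stay clean.
import numpy as np

def make_noisy_tasks(tasks, noisy_task_ratio=0.6, noise_rate=0.4,
                     num_classes=5, rng=np.random.default_rng(0)):
    """tasks: list of (support_labels, query_labels) arrays of class indices."""
    noisy_tasks = []
    for support_y, query_y in tasks:
        support_y = support_y.copy()
        if rng.random() < noisy_task_ratio:
            # symmetric flip: move each selected label to a uniformly
            # chosen *different* class
            flip = rng.random(len(support_y)) < noise_rate
            offsets = rng.integers(1, num_classes, size=flip.sum())
            support_y[flip] = (support_y[flip] + offsets) % num_classes
        noisy_tasks.append((support_y, query_y))
    return noisy_tasks
```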
“…Much effort has also been devoted to sampling algorithms that counter data imbalance, such as SMOTE [56] and ADASYN [57]. As for relabeling methods, [58], [59] uniformly change labels to other classes with a constant flip probability, and the resulting models appear more resistant to label noise. In contrast, our proposed method, adaptive refined labeling, is designed to adaptively refine similar samples with opposite labels to boost the performance of predictors in quantitative trading scenarios.…”
Section: Related Work (mentioning)
confidence: 99%