2022
DOI: 10.1007/978-3-031-19806-9_2

Centrality and Consistency: Two-Stage Clean Samples Identification for Learning with Instance-Dependent Noisy Labels

Abstract: Deep models trained with noisy labels are prone to overfitting and struggle to generalize. Most existing solutions rest on the idealized assumption that the label noise is class-conditional, i.e., instances of the same class share the same noise model independently of their features. In practice, however, real-world noise patterns are usually finer-grained and instance-dependent, which poses a significant challenge, especially in the presence of inter-class imbalance. In this paper, we propose a two-stage…
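To make the class-conditional assumption concrete, here is a minimal sketch (not code from the paper): labels are flipped according to a per-class transition matrix, so every sample of class c is corrupted with the same probabilities, independent of its features. The matrix T, the toy labels, and the 20% flip rate below are illustrative assumptions; instance-dependent noise is precisely the setting where such a single per-class rule no longer holds.

```python
import numpy as np

def add_class_conditional_noise(labels, transition, rng=None):
    """Flip each label y to a class drawn from row `transition[y]`.
    The flip distribution depends only on the class, never on the
    sample's features (the class-conditional noise assumption)."""
    rng = rng or np.random.default_rng(0)
    num_classes = transition.shape[0]
    return np.array([rng.choice(num_classes, p=transition[y]) for y in labels])

# Toy 3-class example: each label is kept with probability ~0.8
# and flipped uniformly to another class otherwise.
T = np.full((3, 3), 0.1) + np.eye(3) * 0.7
clean_labels = np.array([0, 0, 1, 1, 2, 2])
noisy_labels = add_class_conditional_noise(clean_labels, T)
print(noisy_labels)
```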

Cited by 8 publications (1 citation statement) | References 32 publications
“…In the context of learning from noisy labels, the key idea is to leverage these training dynamics as criteria for identifying and separating noisy samples. Several works propose to treat samples with lower training loss as the clean subset (Han et al., 2018; Jiang et al., 2018; Zhao et al., 2022; Wang et al., 2023b); however, such criteria are generally simplistic and inflexible, resulting in the selection of only easy samples. To address this limitation, alternative approaches have been proposed to exploit the loss or confidence values more effectively during training, as demonstrated in (Zhang et al., 2021a) and (Nishi et al., 2021).…”
Section: Confidence-guided Sample Separation
Mentioning; confidence: 99%
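The small-loss criterion mentioned in the statement above can be illustrated with a short, generic sketch. This is not the selection rule of the cited paper or of any specific referenced work; the `keep_ratio` and the random toy inputs are assumptions, and in practice the cutoff is usually derived from an estimated noise rate or from a mixture model fitted to the loss distribution.

```python
import torch
import torch.nn.functional as F

def select_small_loss_subset(logits, labels, keep_ratio=0.7):
    """Return indices of the `keep_ratio` fraction of samples with the
    smallest per-sample cross-entropy loss, treated as the 'clean' subset."""
    # Per-sample losses (no reduction), so each example keeps its own value.
    losses = F.cross_entropy(logits, labels, reduction="none")
    num_keep = int(keep_ratio * len(labels))
    # Indices of the num_keep smallest losses.
    return torch.argsort(losses)[:num_keep]

# Toy usage: random logits and labels stand in for a model's outputs
# on a batch with possibly noisy annotations.
torch.manual_seed(0)
logits = torch.randn(100, 10)          # 100 samples, 10 classes
labels = torch.randint(0, 10, (100,))  # possibly noisy labels
clean_idx = select_small_loss_subset(logits, labels, keep_ratio=0.7)
print(clean_idx.shape)  # torch.Size([70])
```

As the citation statement notes, such a fixed low-loss cutoff tends to keep only easy samples, which motivates the more flexible confidence-guided separation schemes it goes on to cite.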