2021
DOI: 10.48550/arxiv.2106.00445
Preprint
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

Xiaobo Xia,
Tongliang Liu,
Bo Han
et al.

Abstract: In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled during training. However, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely but not certainly to be incorrect. There are actually two possibilities of a large-loss data point: (a) it is mislabeled, and then its loss decreases slower than other data, since deep neural networks "learn patterns first"; (b) it belongs to a…
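The generic small-loss criterion the abstract builds on can be sketched in a few lines. The function below is a hypothetical illustration of plain small-loss selection (keep the fraction of samples with the smallest current losses), not the paper's uncertainty-aware method; the assumed-known `noise_rate` parameter is an illustrative simplification.

```python
import numpy as np

def small_loss_selection(losses, noise_rate):
    """Treat the (1 - noise_rate) fraction of samples with the
    smallest losses as likely clean. A minimal sketch of the
    generic small-loss criterion; `noise_rate` is assumed known."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    order = np.argsort(losses)       # ascending: small-loss samples first
    return np.sort(order[:n_keep])   # indices of the selected samples

losses = np.array([0.1, 2.3, 0.2, 1.8, 0.05])
print(small_loss_selection(losses, noise_rate=0.4))  # → [0 2 4]
```

As the abstract notes, this rule is brittle: losses are computed on-the-fly from a model trained on noisy labels, so a large loss does not certainly indicate a wrong label.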

Cited by 4 publications (11 citation statements)
References 25 publications
“…Many historical parameters are used in the loss calculation and update process; therefore the loss itself could be regarded as a historical record. Some studies have used the unique properties of the loss to identify anomalous samples [139] or biased samples [140]–[145]. For instance, Chu et al. [139] use loss profiles as an informative cue for detecting anomalies, while [140] and [141] record the loss curve of each sample as an additional attribute to help identify the biased samples…”
Section: Publication Methods
confidence: 99%
“…Inside, cyclical training records the historical loss of each sample, then calculates and ranks every sample's normalized average loss to recognize the noisy data with a higher score. Later, some works introduce the value of historical loss to determine whether the data has a noisy label [142]–[145]. To achieve better sample-cleaning performance, Xu et al. [142] also use the historical loss distribution to handle the issue of varying scales of loss distributions across different training periods…”
Section: Publication Methods
confidence: 99%
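The idea quoted above (record each sample's loss over training, normalize per period so differing loss scales are comparable, then rank by averaged score) can be sketched as follows. This is a hedged illustration of the general normalized-historical-loss scoring idea, not the exact procedure of any cited paper; the z-score normalization is an assumption on my part.

```python
import numpy as np

def noise_scores(loss_history):
    """Score samples by normalized average historical loss.
    `loss_history` has shape (epochs, n_samples). Each epoch's
    losses are z-scored so that the varying loss scales across
    training periods are comparable; a higher averaged score
    suggests a likelier noisy label."""
    mu = loss_history.mean(axis=1, keepdims=True)
    sd = loss_history.std(axis=1, keepdims=True) + 1e-12
    normalized = (loss_history - mu) / sd  # per-epoch z-score
    return normalized.mean(axis=0)         # average over epochs

history = np.array([[0.2, 1.5, 0.3],
                    [0.1, 1.2, 0.2]])
scores = noise_scores(history)
print(scores.argmax())  # sample 1 has the highest (noisiest) score
```

Ranking samples by `noise_scores` descending then yields the candidates most likely to be mislabeled.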