2021
DOI: 10.1609/aaai.v35i8.16852

Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning

Abstract: In this paper we revisit the idea of pseudo-labeling in the context of semi-supervised learning, where a learning algorithm has access to a small set of labeled samples and a large set of unlabeled samples. Pseudo-labeling works by applying pseudo-labels to samples in the unlabeled set using a model trained on the combination of the labeled samples and any previously pseudo-labeled samples, and iteratively repeating this process in a self-training cycle. Current methods seem to have abandoned this approach in fa…
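To make the cycle described above concrete, here is a minimal sketch of an iterative pseudo-labeling loop, assuming a generic scikit-learn-style classifier; the confidence threshold, round count, and function name are illustrative choices, not settings taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_self_training(X_labeled, y_labeled, X_unlabeled,
                               confidence_threshold=0.95, max_rounds=5):
    """Illustrative self-training cycle: train on the labeled set plus any
    previously pseudo-labeled samples, pseudo-label the most confident
    remaining unlabeled samples, and repeat."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    X_pool = X_unlabeled.copy()
    model = LogisticRegression(max_iter=1000)

    for _ in range(max_rounds):
        # Train on the labeled samples plus any previously pseudo-labeled samples.
        model.fit(X_train, y_train)
        if len(X_pool) == 0:
            break
        probs = model.predict_proba(X_pool)
        confident = probs.max(axis=1) >= confidence_threshold
        if not confident.any():
            break
        # Pseudo-label the confident samples and move them into the training set.
        pseudo_labels = model.classes_[probs[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, X_pool[confident]])
        y_train = np.concatenate([y_train, pseudo_labels])
        X_pool = X_pool[~confident]

    return model
```

The fixed 0.95 threshold here is only a placeholder for whatever rule is used to decide which pseudo-labeled samples are admitted in each cycle.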

Cited by 132 publications (39 citation statements)
References 25 publications
“…[34], [35], [36] introduce label propagation for pseudo-labeling, and [37] combines self-training with curriculum learning.…”
Section: Self-training
confidence: 99%
“…However, pseudo-labels are prone to concept drift and confirmation bias, where early mislabeled samples lead to accumulating errors. Curriculum labeling [5] mitigates this using a refined training strategy. Noisy Student [55] demonstrated state-of-the-art results on ImageNet [27] using self-training and distillation on a large set of unlabeled images, by iteratively relabeling the data and using increasingly larger student models.…”
Section: Related Work
confidence: 99%
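The curriculum-style alternative mentioned above can be sketched as a pacing rule on prediction confidence. The sketch below assumes pseudo-labels are admitted by a percentile cut-off that loosens each round; the linear schedule, step size, and function name are assumptions for illustration, not necessarily the exact procedure of [5].

```python
import numpy as np

def select_by_percentile(confidences, percentile_step, current_round):
    """Admit only the unlabeled samples whose prediction confidence lies in
    the top fraction of the pool, loosening the cut-off round by round.
    The linear schedule is an illustrative assumption."""
    admitted_fraction = min(1.0, percentile_step * (current_round + 1))
    cutoff = np.percentile(confidences, 100.0 * (1.0 - admitted_fraction))
    return confidences >= cutoff
```

With `percentile_step=0.2`, rounds 0, 1, 2, … admit roughly the top 20%, 40%, 60%, … of the pool, so early rounds only trust the most confident pseudo-labels.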
“…We show in Sec. 4.2 that simply considering expected positives as positives leads to unsatisfactory results, possibly due to label drift of those pseudo-labels, where early mislabeled samples lead to accumulating errors [5]. Our expected-negative (EN) loss only applies a binary cross-entropy loss on annotated positives and the set of expected negatives, ignoring the expected positive labels in the loss.…”
Section: Expected-Negative Loss (EN)
confidence: 99%
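As a reading aid for the expected-negative (EN) idea in the excerpt above, here is a minimal PyTorch sketch, assuming a multi-label setting with per-class logits and boolean masks marking annotated positives and expected negatives; the tensor names and shapes are illustrative, not the cited paper's implementation.

```python
import torch
import torch.nn.functional as F

def expected_negative_bce(logits, annotated_positive, expected_negative):
    """Binary cross-entropy applied only to annotated positives (target 1)
    and expected negatives (target 0); expected positives are excluded
    from the loss entirely. Masks are boolean tensors of shape
    (batch, num_classes)."""
    probs = torch.sigmoid(logits)
    targets = annotated_positive.float()
    # Only annotated positives and expected negatives contribute to the loss.
    loss_mask = (annotated_positive | expected_negative).float()
    per_element = F.binary_cross_entropy(probs, targets, reduction="none")
    return (per_element * loss_mask).sum() / loss_mask.sum().clamp(min=1.0)
```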
“…Semi-supervised learning (SSL) involves two typical paradigms: consistency regularization [3,48,73] and entropy minimization [5,6,27,49]. Consistency regularization forces the model to produce stable and consistent predictions on the same unlabeled data under various perturbations [71].…”
Section: Related Work
confidence: 99%
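To make the consistency-regularization idea in the last excerpt concrete, the sketch below assumes an arbitrary classifier `model` and an augmentation function `perturb`; the squared error between softmax outputs of two perturbed views is one common choice of consistency term, not a specific method from the cited works.

```python
import torch

def consistency_loss(model, x_unlabeled, perturb):
    """Encourage stable predictions on the same unlabeled batch under two
    different random perturbations. One view is treated as a fixed target
    (stop-gradient), a common but not universal design choice."""
    with torch.no_grad():
        p_target = torch.softmax(model(perturb(x_unlabeled)), dim=-1)
    p_student = torch.softmax(model(perturb(x_unlabeled)), dim=-1)
    return torch.mean((p_student - p_target) ** 2)
```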