2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00521

Label Propagation for Deep Semi-Supervised Learning

Abstract: Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic methods on semi-supervised learning that have focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption: that similar examples should get the same prediction. In this work, we employ a transductive label propagation method …
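
The manifold assumption mentioned in the abstract is commonly realized by building a nearest-neighbor affinity graph over features extracted by the network, so that similar examples are connected and can share predictions. Below is a minimal, illustrative sketch of such a graph in Python; the value of k and the use of cosine similarity are assumptions for the example, not the paper's exact settings.

    import numpy as np

    def knn_affinity(features, k=50):
        # L2-normalize descriptors so the dot product is cosine similarity
        feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
        sims = feats @ feats.T
        np.fill_diagonal(sims, 0.0)          # no self-loops
        W = np.zeros_like(sims)
        for i in range(sims.shape[0]):
            nn = np.argsort(sims[i])[-k:]    # indices of the k most similar examples
            W[i, nn] = sims[i, nn]
        return np.maximum(W, W.T)            # symmetrize the affinity matrix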

Cited by 563 publications (501 citation statements)
References 32 publications
“…The same principle is adopted by Shi et al [29], where the authors further add contrastive loss to the consistency loss. Iscen et al [30] employ a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network.…”
Section: Deep Semi-supervised Learning
confidence: 99%
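
As a rough illustration of the pseudo-labeling step described in the statement above, the sketch below turns propagated class scores into hard pseudo-labels for the unlabeled examples. The entropy-based certainty weight is one plausible weighting choice and should be read as an assumption, not the paper's exact formulation.

    import numpy as np

    def pseudo_labels_from_scores(F, eps=1e-12):
        # F: (n_examples, n_classes) nonnegative propagated scores
        P = F / np.clip(F.sum(axis=1, keepdims=True), eps, None)  # row-normalize to distributions
        labels = P.argmax(axis=1)                                  # hard pseudo-label per example
        entropy = -(P * np.log(P + eps)).sum(axis=1)
        weights = 1.0 - entropy / np.log(P.shape[1])               # near 1 for confident predictions
        return labels, weights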
“…SSL is a transversal task for different domains including images [6], audio [12], time series [13], and text [14]. Recent approaches in image classification primarily focus on exploiting the consistency in the predictions for the same sample under different perturbations (consistency regularization) [11,15], while other approaches directly generate labels for the unlabeled data to guide the learning process (pseudo-labeling) [16,17]. These two alternatives differ importantly in the mechanism they use to exploit unlabeled samples.…”
Section: Introduction
confidence: 99%
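
To make the contrast between the two families concrete, here is a hedged sketch of the corresponding loss terms on an unlabeled batch. The mean-squared-error form of the consistency term and the function names are illustrative assumptions, not the formulation of any specific cited method.

    import torch
    import torch.nn.functional as F

    def consistency_loss(logits_weak, logits_strong):
        # Penalize changes in the prediction for the same sample under two perturbations
        target = F.softmax(logits_weak, dim=1).detach()
        return F.mse_loss(F.softmax(logits_strong, dim=1), target)

    def pseudo_label_loss(logits, pseudo_labels):
        # Supervise unlabeled samples directly with generated (pseudo) labels
        return F.cross_entropy(logits, pseudo_labels)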
“…The semi-supervised deep learning model generates three modules to exploit unlabeled data by considering model initialization, diversity augmentation, and pseudo-label editing. A graph-based transduction approach that works through the propagation of a few labels, called label propagation, was used in [17] to improve classification performance and obtain estimated labels. This method consists of two steps.…”
Section: Semi-supervised Learning Defense Algorithms
confidence: 99%
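
A minimal sketch of the two-step procedure referenced above, under common assumptions: step one diffuses the few known labels over a symmetrically normalized affinity graph with the standard diffusion update F <- alpha * S @ F + (1 - alpha) * Y, and step two reads off an estimated label for every node. The affinity matrix W could come from a k-NN construction like the earlier sketch; alpha and the iteration count here are illustrative.

    import numpy as np

    def propagate_labels(W, Y, alpha=0.99, iters=50):
        # W: (n, n) symmetric affinity matrix; Y: (n, n_classes) one-hot rows for
        # labeled nodes, all-zero rows for unlabeled nodes.
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        S = D_inv_sqrt @ W @ D_inv_sqrt            # symmetrically normalized graph
        F = Y.astype(float).copy()
        for _ in range(iters):                     # step 1: propagate labels through the graph
            F = alpha * (S @ F) + (1.0 - alpha) * Y
        return F.argmax(axis=1)                    # step 2: estimated label per node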