2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01341
Density-Aware Graph for Deep Semi-Supervised Visual Recognition

Cited by 28 publications (22 citation statements). References 13 publications.
“…FixMatch (Sohn et al. 2020) provides a simplified version where pseudo-labeling is used instead of distribution sharpening, without the need for additional tricks such as distribution alignment or augmentation anchoring (i.e., using more than one weak and one strong augmented version) from ReMixMatch or training signal annealing from UDA. Additionally, similar unlabeled images can be encouraged to have consistent pseudo-labels (Hu, Yang, and Nevatia 2021), or pseudo-labels can be propagated via a similarity graph (Li et al. 2020) or centroids (Han et al. 2021). Our method extends FixMatch by leveraging a self-supervised loss in cases where the pseudo-label is unconfident, allowing it to perform barely-supervised learning in realistic settings.…”
Section: Semi-supervised Learning (mentioning; confidence: 99%)
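The confidence-gated pseudo-labeling that FixMatch is described as using above can be sketched in a few lines. The function name and NumPy formulation below are illustrative assumptions, not the authors' implementation: a weak view's prediction becomes a hard pseudo-label only when its maximum probability clears a threshold, and only those examples contribute a cross-entropy term on the strong view.

```python
import numpy as np

def fixmatch_unlabeled_loss(weak_probs, strong_logits, threshold=0.95):
    """Confidence-thresholded pseudo-label loss in the style of FixMatch.

    weak_probs:    (N, C) predicted probabilities for weakly augmented views.
    strong_logits: (N, C) logits for strongly augmented views of the same images.
    Only examples whose weak-view max probability reaches `threshold`
    contribute cross-entropy against their hard pseudo-label.
    """
    pseudo = weak_probs.argmax(axis=1)             # hard pseudo-labels
    mask = weak_probs.max(axis=1) >= threshold     # confidence gate
    # numerically stable log-softmax of the strong-view logits
    log_probs = strong_logits - strong_logits.max(axis=1, keepdims=True)
    log_probs = log_probs - np.log(np.exp(log_probs).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(pseudo)), pseudo]  # per-example cross-entropy
    return float((ce * mask).sum() / max(mask.sum(), 1))  # mean over kept examples
```

Lowering the threshold admits more (noisier) pseudo-labels into the loss, which is exactly the trade-off the cited follow-up work addresses with a self-supervised fallback for unconfident examples.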
“…In terms of pseudo-labeling and leveraging unlabeled data, we briefly review the semi- and self-supervised methods here. Semi-supervised methods include consistency regularization [46,2,43,32], entropy minimization [16,37], and pseudo-labeling [20,21]. However, the conventional pseudo-labeling strategy works under the hypothesis that the unlabeled data share the same class space as the labeled data. In another research line, self-supervised learning attempts to learn purely from unlabeled data [31,15] or to serve as an auxiliary supervision on training data [14,55].…”
Section: Related Work (mentioning; confidence: 99%)
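Of the semi-supervised ingredients enumerated above, entropy minimization is the simplest to write down: the mean prediction entropy over unlabeled examples is added to the loss, pushing the model toward confident decisions. A minimal sketch, with an illustrative function name and `probs` assumed to be an (N, C) array of softmax outputs:

```python
import numpy as np

def entropy_minimization_loss(probs, eps=1e-12):
    """Mean Shannon entropy of the predicted class distributions.

    Minimizing this term drives predictions on unlabeled data toward
    low-entropy (confident) distributions; eps guards log(0).
    """
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())
```

The loss is maximal (log C) for uniform predictions and zero for one-hot predictions, which is why it is typically weighted and combined with a supervised term rather than used alone.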
“…In a completely different direction from network predictions, it has been shown from a classical perspective [25] that energy-based models such as graphs are well suited to the task of label propagation. Accordingly, several works [33], [37], [38] have shown good performance by iteratively feeding the feature representations of a neural network to a graph, generating pseudo-labels on the graph, and then using those labels to train the network. However, graphical approaches have yet to show that they can produce state-of-the-art results compared to model-based approaches such as [12], [13].…”
Section: B. Pseudo-labelling Techniques (mentioning; confidence: 99%)
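The network-to-graph recipe described above rests on classical label propagation: build an affinity graph over the network's features, then diffuse the known labels across it. The sketch below follows the standard Zhou et al.-style iteration F ← αSF + (1−α)Y with cosine-similarity affinities; all names and parameter choices are illustrative, not the specific construction of the cited papers:

```python
import numpy as np

def propagate_labels(features, labels, num_classes, alpha=0.9, n_iter=50):
    """Pseudo-label generation by label propagation on a similarity graph.

    features: (N, D) feature vectors (e.g., from a neural network).
    labels:   length-N int array; class index for labeled points, -1 otherwise.
    Returns hard pseudo-labels for all N points.
    """
    # cosine-similarity affinity matrix with non-negative weights, zero diagonal
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = np.clip(X @ X.T, 0.0, None)
    np.fill_diagonal(W, 0.0)
    # symmetric normalization S = D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # one-hot label matrix, nonzero only on the labeled points
    Y = np.zeros((len(labels), num_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0
    # diffuse: F <- alpha * S @ F + (1 - alpha) * Y
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```

In the iterative schemes the excerpt describes, these pseudo-labels would then supervise the next round of network training, and fresh features would rebuild the graph.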
“…1) Methods which used the 13-CNN architecture [32]: Π-Model [30], Mean Teacher (MT) [32], Virtual Adversarial Training (VAT) [31], Label Propagation for Deep Semi-Supervised Learning (LP) [33], Smooth Neighbors on Teacher Graphs (SNTG) [42], Stochastic Weight Averaging (SWA) [43], Interpolation Consistency Training (ICT) [21], Dual Student [44], Transductive Semi-Supervised Deep Learning (TSSDL) [35], Density-Aware Graphs (DAG) [38], and Pseudo-Label Mixup [20]. Unfortunately, owing to the natural progress of the field, each paper makes different implementation choices that are not standardised.…”
Section: Evaluation Protocol (mentioning; confidence: 99%)