RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms
Preprint, 2019
DOI: 10.48550/arxiv.1912.08766

Abstract: Semi-Supervised Learning (SSL) algorithms have shown great potential in training regimes where access to labeled data is scarce but access to unlabeled data is plentiful. However, our experiments illustrate several shortcomings that prior SSL algorithms suffer from; in particular, they perform poorly when the unlabeled and labeled data distributions differ. To address these observations, we develop RealMix, which achieves state-of-the-art results on standard benchmark datasets across different labeled and unlabeled se…


Cited by 9 publications (18 citation statements); references 5 publications.

“…We also conduct experiments on Tiny ImageNet to verify the performance of our method on a larger dataset. Our method is compared against MixMatch (Berthelot et al, 2019b), RealMix (Nair et al, 2019), ReMixMatch (Berthelot et al, 2019a), and FixMatch (Sohn et al, 2020). As recommended by Oliver et al (2018), all methods should be implemented using the same codebase.…”
Section: Methods
confidence: 99%
“…As presented in Section 4, we evaluate our method against four methods: MixMatch (Berthelot et al, 2019b), RealMix (Nair et al, 2019), ReMixMatch (Berthelot et al, 2019a), and FixMatch (Sohn et al, 2020). The comparison of the methods is shown in Table 6.…”
Section: B Comparison of Methods
confidence: 99%
“…For semi-supervised learning, it is all about learning the underlying structure of a large amount of unlabeled data [30]. So far, we only rely on the phylogenetic tree to build a common label space shared by in-distribution and out-of-distribution data.…”
Section: Naive Pseudo-labeling via Relation Prediction
confidence: 99%
“…Additional unlabelled observations are generated by perturbing the original set of unlabelled observations by adding random noise or transformations such as rotations or translations. Data augmentation is typically coupled with consistency regularization so that similar predictions are encouraged on the original instances and the augmented versions (Berthelot et al, 2019; Nair et al, 2019; Wei et al, 2021). The combined use of small local alterations and more aggressive global changes has been found to be an effective strategy (Sohn et al, 2020).…”
Section: Brief Overview of SSL Approaches
confidence: 99%
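
The consistency-regularization pattern described in the last excerpt (encouraging similar predictions on an instance and its augmented version, pairing a small local alteration with a more aggressive change) can be illustrated with a minimal sketch. This is an illustrative example only, not the RealMix algorithm or the implementation of any cited paper; the model, augmentations, and loss weight (SmallConvNet, weak_augment, strong_augment, the fixed 1.0 weight) are assumptions chosen for brevity.

```python
# Minimal consistency-regularization sketch for semi-supervised learning.
# All component names here are hypothetical placeholders, not from the cited papers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallConvNet(nn.Module):
    """Tiny image classifier used only to make the example runnable."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def weak_augment(x):
    # Small local alteration: light additive Gaussian noise.
    return x + 0.05 * torch.randn_like(x)


def strong_augment(x):
    # More aggressive global change: horizontal flip plus stronger noise.
    return torch.flip(x, dims=[-1]) + 0.15 * torch.randn_like(x)


def consistency_loss(model, x_unlabeled):
    # Encourage similar predictions on the (weakly augmented) instance
    # and its strongly augmented version; the weak view gives the target.
    with torch.no_grad():
        target = F.softmax(model(weak_augment(x_unlabeled)), dim=1)
    logits = model(strong_augment(x_unlabeled))
    return F.kl_div(F.log_softmax(logits, dim=1), target, reduction="batchmean")


if __name__ == "__main__":
    model = SmallConvNet()
    x_lab = torch.randn(8, 3, 32, 32)          # small labeled batch
    y_lab = torch.randint(0, 10, (8,))
    x_unl = torch.randn(32, 3, 32, 32)         # larger unlabeled batch

    supervised = F.cross_entropy(model(x_lab), y_lab)
    unsupervised = consistency_loss(model, x_unl)
    loss = supervised + 1.0 * unsupervised     # fixed weight on the unlabeled term
    loss.backward()
    print(float(loss))
```

In this sketch the weakly augmented view provides a detached target distribution and the strongly augmented view is pushed toward it, mirroring the weak/strong pairing the excerpt attributes to Sohn et al (2020); practical systems would replace the toy augmentations with flips, crops, and stronger policies and typically schedule the unlabeled loss weight rather than fixing it.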