“…Semi-Supervised Learning (SSL) trains models by leveraging both labeled and unlabeled data, and recent works [43,52,57] broadly divide existing methods into two groups: consistency regularization [5,6,13,20,23,30,36,49] and pseudo-labeling [1,3,15,18,42,44]. The former encourages a model to make consistent predictions on different augmented versions of the same image, using perturbations such as virtual adversarial perturbations [30], image mix-up [6,55], grid-masking [8], or even an ensemble of all popular augmentations [5,10,36].…”
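The consistency-regularization idea above can be sketched with a toy example. The snippet below is a minimal, framework-agnostic illustration (not any cited method's implementation): it treats the model's class logits under two augmentations of the same unlabeled batch as given arrays, and penalizes disagreement between the two predicted distributions with a mean-squared-error consistency loss. All array values and names here are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_view1, logits_view2):
    """Mean squared error between the predicted class distributions
    of two augmented views of the same unlabeled images."""
    p = softmax(logits_view1)
    q = softmax(logits_view2)
    return float(np.mean((p - q) ** 2))

# Toy unlabeled batch: 4 images, 3 classes. The two views stand in for
# the model's outputs under two random augmentations (values are made up).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
perturbed = logits + 0.1 * rng.normal(size=(4, 3))

loss_identical = consistency_loss(logits, logits)     # identical views
loss_perturbed = consistency_loss(logits, perturbed)  # perturbed view
print(loss_identical)       # 0.0 — agreement costs nothing
print(loss_perturbed > 0)   # disagreement is penalized
```

In practice this unsupervised term is added to the usual supervised loss on the labeled subset, so unlabeled images contribute a training signal by demanding prediction stability under augmentation.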