“…Most of these works improve the classical self-learning algorithm by preventing it from overfitting to pseudo-label errors, using teacher-student or ensemble-of-experts models [13,64,61], while other approaches focus on designing efficient sample-selection and outlier-detection strategies [54,3]. More robust frameworks have also been designed by optimizing losses based on distance distributions [20,27]; by leveraging local features [11], intra- and inter-camera features [53,25], the labeled source samples [8], multiple cluster views [10], or attention-based models [19]; or by combining pseudo-labels with domain-translation methods [60,48,71,4], online pseudo-label refinement, temporal ensembling and label propagation [62,66], or meta-learning [55]. A recent approach, SpCL [14], proposed self-contrastive learning during training, leveraging both the source and target samples.…”
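To make the self-contrastive idea concrete, the sketch below shows a minimal InfoNCE-style loss over a feature memory, in the spirit of SpCL's contrastive objective (where the memory would hold source class centroids, target cluster centroids, and un-clustered instance features). This is an illustrative sketch under those assumptions, not SpCL's actual implementation; the names unified_contrastive_loss, memory, positive_idx, and temperature are hypothetical.

    import torch
    import torch.nn.functional as F

    def unified_contrastive_loss(queries, memory, positive_idx, temperature=0.05):
        # InfoNCE-style objective: each L2-normalized query is pulled toward
        # its positive memory entry (e.g. its cluster centroid) and pushed
        # away from all other entries.
        logits = queries @ memory.t() / temperature  # (B, M) similarity scores
        return F.cross_entropy(logits, positive_idx)

    # Toy usage with random, L2-normalized features (illustration only).
    B, M, D = 8, 100, 128
    queries = F.normalize(torch.randn(B, D), dim=1)
    memory = F.normalize(torch.randn(M, D), dim=1)
    positive_idx = torch.randint(0, M, (B,))
    loss = unified_contrastive_loss(queries, memory, positive_idx)

The low temperature sharpens the softmax over memory entries, a common choice in contrastive re-identification losses; in SpCL the memory entries are additionally updated online during training.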