2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00253
Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation

Abstract: In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them. However, the trained model cannot produce a highly discriminative feature representation for the target domain because the training data is dominated by labeled samples from the source domain. This could lead to disconnection between the labeled and unlabeled target samples as well as misalignment between unlabeled target samples and the source domain. …

Cited by 80 publications (63 citation statements)
References 29 publications
“…Inspired by previous works [32,88,93] in domain adaptation, we add FixMatch [63] to the existing method to construct a stronger baseline. Specifically, let T(x) and T′(x) denote the weakly and strongly augmented views of x ∈ D_tu (the unlabeled target data), respectively.…”
Section: Methods, 4.1 A Stronger Baseline
Mentioning confidence: 99%
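The weak/strong two-view recipe quoted above follows FixMatch [63]: a hard pseudo-label from the weakly augmented view supervises the strongly augmented view, filtered by a confidence threshold. Below is a minimal PyTorch sketch of that objective; `model`, `weak_aug`, `strong_aug`, and the 0.95 threshold are illustrative assumptions, not the cited implementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_consistency_loss(model, x_unlabeled, weak_aug, strong_aug,
                              threshold=0.95):
    """FixMatch-style loss on unlabeled target samples (sketch)."""
    with torch.no_grad():
        # Hard pseudo-labels come from the weak view, without gradients.
        probs_weak = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo_labels = probs_weak.max(dim=1)
        mask = (conf >= threshold).float()  # keep confident samples only

    # The strong view is trained to match the weak view's pseudo-labels.
    logits_strong = model(strong_aug(x_unlabeled))
    per_sample_ce = F.cross_entropy(logits_strong, pseudo_labels,
                                    reduction="none")
    return (per_sample_ce * mask).mean()
```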
“…[83] breaks down the SSDA problem into two sub-problems, namely a semi-supervised learning problem and a UDA problem, and then proposes to learn consistent predictions using co-training. CDAC [32] proposes an adversarial adaptive clustering loss to group features of unlabeled target data into clusters and perform cluster-wise feature alignment across domains. CLDA [62] employs class-wise contrastive learning to reduce the inter-domain gap and instance-level contrastive alignment to minimize the intra-domain discrepancy.…”
Section: Semi-supervised Domain Adaptation
Mentioning confidence: 99%
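CDAC's adaptive clustering loss, summarized above, scores pairs of unlabeled target samples and pulls predicted same-cluster pairs together. The sketch below captures only the pairwise part under stated assumptions: `probs` are softmax predictions, and `sim_targets` is a precomputed binary pairwise-similarity matrix; how CDAC actually derives those targets, and its adversarial (gradient-reversed) optimization of this loss, are omitted here.

```python
import torch
import torch.nn.functional as F

def adaptive_clustering_loss(probs, sim_targets):
    """Pairwise clustering loss in the spirit of CDAC [32] (sketch).

    probs:       (N, C) softmax predictions for unlabeled target samples.
    sim_targets: (N, N) float matrix, 1.0 where a pair is believed to
                 share a class (e.g. from nearest-neighbour feature
                 agreement -- an assumption here), 0.0 otherwise.
    """
    # The inner product of two prediction vectors approximates the
    # probability that the pair falls in the same cluster.
    pair_probs = (probs @ probs.t()).clamp(1e-6, 1.0 - 1e-6)
    return F.binary_cross_entropy(pair_probs, sim_targets)
```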
“…Thus it combines the advantages of both consistency regularization and pseudo-labeling (or self-training). This approach has been applied in many domains, such as semi-supervised learning (SSL) [2,3,36,50], unsupervised learning (USL) [8,12], unsupervised domain adaptation (UDA) [6,35], and semi-supervised domain adaptation (SSDA) [21,22], all of which demonstrate the effectiveness of consistency training in learning high-quality representations from label-scarce data. More recently, some works have extended consistency training to other tasks, such as unsupervised domain adaptation for image segmentation [27] and semi-supervised 3D object detection [43].…”
Section: Related Work
Mentioning confidence: 99%
“…Recently, consistency training has been acknowledged as a powerful algorithmic paradigm for robust learning from label-scarce data, e.g. in unsupervised/semi-supervised learning [8,12,36,50] and unsupervised/semi-supervised domain adaptation [6,21,22,35]. It works by forcing the model to make consistent predictions under different perturbations/augmentations of the input sample (referred to as different views); the prediction in one view usually serves as the pseudo-label for the other view.…”
Section: Introduction
Mentioning confidence: 99%
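The cross-view supervision described in this quote need not use hard labels as in the FixMatch sketch earlier; a common soft variant matches the full predictive distributions of the two views. A minimal sketch, assuming the same two-view setup (`view1`, `view2` are two augmented batches of the same samples):

```python
import torch
import torch.nn.functional as F

def soft_consistency_loss(model, view1, view2):
    """Soft cross-view consistency (sketch): the first view's detached
    prediction serves as the target distribution for the second view."""
    with torch.no_grad():
        target = F.softmax(model(view1), dim=1)
    log_pred = F.log_softmax(model(view2), dim=1)
    # KL divergence between the two views' predictive distributions.
    return F.kl_div(log_pred, target, reduction="batchmean")
```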