2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.01004
Cluster Alignment With a Teacher for Unsupervised Domain Adaptation

Abstract: Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution. However, such methods typically learn a domain-invariant representation space to match the marginal distributions of the source and target domains, while ignoring their fine-level structures. In this paper, we propose Cluster Alignment with a Teacher (CAT) for unsupervised domain adaptation, which can effe…

Cited by 205 publications (132 citation statements). References 33 publications.
“…We empirically observe that for z^t with arg max_k p^t_k = k, or z^s with its true label y^s = k, they are learned to drift together with the cluster centroid c_k in the feature space, rather than to collapse to c_k. In some cases, the class-wise distances between the source and target domains are even getting larger by applying (8) and (10), suggesting that the SRGenC objective (12) is indeed modulating the feature-space learning via generative clustering, in contrast to existing methods [4], [5], [6], [8], [38], [48] that explicitly align the features across the two domains. Results of these empirical studies are presented in Fig.…”
Section: Structural Source Regularization by Learning a Common Set of…
Citation type: mentioning. Confidence: 99%
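The centroid behaviour described in the excerpt above — per-class centroids c_k as class-conditional feature means, and class-wise source–target distances measured between matching centroids — can be sketched in a few lines. A minimal NumPy sketch; function names and the plain-mean centroid estimate are illustrative, not taken from the cited paper:

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Per-class centroids c_k: the mean of the features assigned
    to class k. `features` is (N, D); `labels` is (N,) of ints."""
    centroids = np.zeros((num_classes, features.shape[1]))
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            centroids[k] = features[mask].mean(axis=0)
    return centroids

def classwise_distance(src_centroids, tgt_centroids):
    """Mean Euclidean distance between matching class centroids of
    the source and target domains -- the quantity the excerpt
    observes growing or shrinking during training."""
    return np.linalg.norm(src_centroids - tgt_centroids, axis=1).mean()
```

Tracking `classwise_distance` over training iterations is one simple way to reproduce the kind of empirical study the excerpt refers to.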
“…To overcome the lack of target semantic supervision, other approaches [67,16] resort to pseudo-labels directly from network predictions to discover target class-conditional structures in the feature space. Those structures are then exploited to perform a within-domain feature clusterization [16] and cross-domain feature alignment by centroid matching [67,16]. Starting from analogous premises, we extend a similar form of inter and intra class adaptation to the semantic segmentation scenario, by introducing additional modules that help to address the inherent increased complexity.…”
Section: Related Work
Citation type: mentioning. Confidence: 99%
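The pseudo-labelling plus centroid-matching scheme described in the excerpt above can be sketched as follows: target samples are labelled by the arg-max of the network's predicted class probabilities, low-confidence predictions are discarded, and matching source/target class centroids are pulled together. A hedged NumPy sketch of the general idea, not the exact objective of [67] or [16]; the `threshold` value is an illustrative assumption:

```python
import numpy as np

def centroid_matching_loss(src_feats, src_labels,
                           tgt_feats, tgt_probs,
                           num_classes, threshold=0.9):
    """Cross-domain centroid-matching loss with pseudo-labels.

    Target samples get pseudo-labels from the arg-max of the
    predicted class probabilities `tgt_probs` (N_t, K); samples whose
    top probability falls below `threshold` are ignored. The loss is
    the summed squared distance between matching source and target
    class centroids."""
    pseudo = tgt_probs.argmax(axis=1)
    keep = tgt_probs.max(axis=1) >= threshold
    loss = 0.0
    for k in range(num_classes):
        s_mask = src_labels == k
        t_mask = keep & (pseudo == k)
        if s_mask.any() and t_mask.any():
            c_s = src_feats[s_mask].mean(axis=0)
            c_t = tgt_feats[t_mask].mean(axis=0)
            loss += float(((c_s - c_t) ** 2).sum())
    return loss
```

In practice the centroids are usually tracked as moving averages across mini-batches rather than recomputed per batch, since a single batch may contain few (or no) samples of some classes.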
“…In addition, Xu et al. [15] proposed to adapt the feature norms of both domains to achieve an equilibrium, which is free from any relationship between the label spaces. Recently, some methods apply ensemble learning to improve the discriminative ability of the extracted features [37], [38].…”
Section: B. Partial Domain Adaptation
Citation type: mentioning. Confidence: 99%
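The feature-norm adaptation mentioned in the excerpt above can be sketched as a penalty that pushes the mean feature norms of both domains toward a shared value, without using any label information. This is only a sketch in the spirit of adaptive feature-norm methods; the `target_norm` value is an illustrative assumption, not the setting used in [15]:

```python
import numpy as np

def feature_norm_penalty(src_feats, tgt_feats, target_norm=25.0):
    """Penalty pushing the mean L2 feature norm of each domain
    toward a shared value `target_norm`. No labels are involved, so
    the term is indifferent to how the label spaces relate."""
    src_norm = np.linalg.norm(src_feats, axis=1).mean()
    tgt_norm = np.linalg.norm(tgt_feats, axis=1).mean()
    return (src_norm - target_norm) ** 2 + (tgt_norm - target_norm) ** 2
```

Because the penalty never touches class labels, it remains applicable in partial domain adaptation, where the target label space is a subset of the source label space.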
“…French et al. [37] used a mean teacher model [39] to mine the target-domain knowledge. Deng et al. [38] explored the class-conditional structure of the target domain with an ensemble teacher model.…”
Section: B. Partial Domain Adaptation
Citation type: mentioning. Confidence: 99%
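The mean teacher model referenced in the excerpt above maintains a teacher network whose weights are an exponential moving average (EMA) of the student's weights, as introduced in [39]. A minimal sketch of that update; representing parameters as a dict of arrays and the `decay` value are illustrative choices:

```python
import numpy as np

def ema_update(teacher, student, decay=0.99):
    """Mean-teacher weight update: each teacher parameter becomes an
    exponential moving average of the matching student parameter.
    `teacher` and `student` map parameter names to ndarrays."""
    for name, w in student.items():
        teacher[name] = decay * teacher[name] + (1.0 - decay) * w
    return teacher
```

The teacher's slowly-moving weights yield more stable predictions on unlabeled target data, which is why teacher outputs (rather than raw student outputs) are typically used as the training signal for the target domain.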