2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00814

Semi-Supervised Domain Adaptation via Minimax Entropy

Abstract: Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision. However, we show that these techniques perform poorly when even a few labeled examples are available in the target domain. To address this semi-supervised domain adaptation (SSDA) setting, we propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model. Our base model consists of a feature encoding network, followed by…
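The abstract stops mid-sentence, but the recipe it outlines is a feature encoder feeding a similarity-based classifier whose weight vectors act as class prototypes, trained with a classification loss on labeled data and an adversarial entropy term on unlabeled target data. The sketch below is only a rough illustration of that minimax entropy idea under assumed choices (a PyTorch setup, a gradient-reversal layer, and the hypothetical names grad_reverse, CosineClassifier, mme_losses, lambda_ent); it is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -coeff on the backward pass."""

    @staticmethod
    def forward(ctx, x, coeff):
        ctx.coeff = coeff
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.coeff * grad_output, None


def grad_reverse(x, coeff=1.0):
    return GradReverse.apply(x, coeff)


class CosineClassifier(nn.Module):
    """Temperature-scaled cosine-similarity classifier; each weight row acts as a class prototype."""

    def __init__(self, feat_dim, num_classes, temperature=0.05):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, feat_dim))
        self.temperature = temperature

    def forward(self, feats):
        feats = F.normalize(feats, dim=1)
        protos = F.normalize(self.weight, dim=1)
        return feats @ protos.t() / self.temperature


def mme_losses(encoder, classifier, x_labeled, y_labeled, x_unlabeled, lambda_ent=0.1):
    """Cross-entropy on labeled (source + few-shot target) data plus an
    adversarial entropy term on unlabeled target data."""
    # Standard classification loss on labeled examples.
    logits_l = classifier(encoder(x_labeled))
    loss_cls = F.cross_entropy(logits_l, y_labeled)

    # Negative entropy of the unlabeled-target predictions. Because the features
    # pass through a gradient-reversal layer before the classifier, one ordinary
    # minimization step makes the classifier *maximize* entropy (pulling its
    # prototypes toward target features) while the encoder *minimizes* it
    # (clustering target features around the prototypes): the minimax game.
    feats_u = grad_reverse(encoder(x_unlabeled), coeff=1.0)
    probs_u = F.softmax(classifier(feats_u), dim=1)
    neg_entropy = (probs_u * torch.log(probs_u + 1e-8)).sum(dim=1).mean()

    return loss_cls + lambda_ent * neg_entropy
```

With this formulation a single backward pass sends opposite-signed entropy gradients to the classifier and the encoder, so no alternating optimizer steps are needed; any nn.Module mapping inputs to feat_dim-dimensional vectors (e.g. a ResNet trunk with its head removed) could serve as the encoder in this sketch.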

Cited by 547 publications (621 citation statements). References 18 publications (29 reference statements).

“…However, as mentioned, even if global domain distribution alignment is enforced, it often leads to per-class alignment, which reduces the discriminativeness of the learned feature representation for the FSL task. Moreover, since existing UDA methods still assume that the target domain contains the same classes as the source domain, the more recent methods that focus on per-class cross-domain alignment [25]-[29] are unsuitable for our CD-FSL problem. Thus, global domain data distribution alignment [9], [14], [47] is adopted in our DPDAPN with a special mechanism introduced to prevent per-class alignment.…”
Section: Related Work
confidence: 99%
“…Nevertheless, a naïve combination of existing DA and FSL methods fails to offer an effective solution (see Tables 1 & 2) because the existing UDA methods assume that the target and source domains have identical label spaces. Given that they are mainly designed for distribution alignment across domains (recently focusing on per-class alignment [25]-[29]), they are intrinsically unsuited for FSL, where the target classes are completely different from the source classes; either global or per-class distribution alignment would have a detrimental effect on class separation and model discriminativeness. How to achieve domain distribution alignment for DA while maintaining source/target per-class discriminativeness thus becomes the key to CD-FSL.…”
Section: Introduction
confidence: 99%
“…A plethora of SSDA works [18,25,36] have already emerged. For example, [36] designs a minimax entropy strategy to achieve better adaptation. Some works [18,25] consider the intra-adaptation bias, but they either leverage the labeled data to learn discriminative features, as in [18], or minimize the entropy similarity between intra-target samples, as in [25].…”
Section: Related Work
confidence: 99%
“…For the semi-supervised methods, a labeled source domain, a small number of annotated target samples, and unlabeled target samples are used in the training process. The method proposed in this paper falls under the umbrella of semi-supervised HDA, given that prior works such as [2,21,27] have demonstrated the effectiveness of exploiting unlabeled target-domain data.…”
Section: Introduction
confidence: 98%