2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00417

Deep Cocktail Network: Multi-source Unsupervised Domain Adaptation with Category Shift

Abstract: Unsupervised domain adaptation (UDA) conventionally assumes that labeled source samples come from a single underlying source distribution. In practical scenarios, however, labeled data are typically collected from diverse sources. The multiple sources differ not only from the target but also from each other, so the domain adapters should not be modeled in the same way. Moreover, those sources may not completely share their categories, which raises a further transfer challenge called category shift. In th…

Cited by 343 publications (342 citation statements).
References 45 publications.
“…Mansour et al [28] claim that the target hypothesis can be represented by a weighted combination of source hypotheses. In the more applied works, Deep Cocktail Network (DCTN) [45] proposes a k-way domain discriminator and category classifier for digit classification and real-world object recognition. Hoffman et al [14] propose normalized solutions with theoretical guarantees for cross-entropy loss, aiming to provide a solution for the MSDA problem with very practical benefits.…”
Section: Related Work (mentioning)
confidence: 99%
“…Ben-David et al [1] pioneer this direction by introducing an H∆H-divergence between the weighted combination of source domains and target domain. More applied works [6,45] use an adversarial discriminator to align the multi-source domains with the target domain. However, these works focus only on aligning the source domains with the target, neglecting the domain shift between the source domains.…”
Section: Introduction (mentioning)
confidence: 99%
“…Benchmarks. Digit-five [44] is composed of five domain sets drawn from mt (MNIST) [23], mm (MNIST-M) [10], sv (SVHN) [32], up (USPS), and sy (Synthetic Digits) [10], respectively. There are 25,000 training and 9,000 testing images in mt, mm, sv, and sy, while the entire USPS is used as the domain set up.…”
Section: Setup (mentioning)
confidence: 99%
“…In many applications multiple source domains may be available. This fact has motivated the study of multi-source DA algorithms [34,23]. In [34] an adversarial learning framework for multi-source DA is proposed, inspired by [10].…”
Section: Related Work (mentioning)
confidence: 99%