2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00845

From Source to Target and Back: Symmetric Bi-Directional Adaptive GAN

Abstract: The effectiveness of GANs in producing images according to a specific visual domain has shown potential in unsupervised domain adaptation. Source labeled images have been modified to mimic target samples for training classifiers in the target domain, and inverse mappings from the target to the source domain have also been evaluated, without new image generation. In this paper we aim at getting the best of both worlds by introducing a symmetric mapping among domains. We jointly optimize bi-directional image tran…

Cited by 228 publications (180 citation statements). References 25 publications.
“…Along with standard GAN losses, they introduced the cycle loss where generators minimize the reconstruction loss. [35] proposed to modify the consistency loss so that the label of the reconstructed image is preserved, instead of the image itself. [30] combined several of these reconstruction losses.…”
Section: Discussion and Related Work
confidence: 99%
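The cycle loss quoted in this excerpt can be sketched as a minimal numpy example. Here `g_st` and `g_ts` are toy stand-in generators chosen to be exact inverses, not the trained networks from the cited papers; the point is only that when the round trip reconstructs the input, the loss vanishes.

```python
import numpy as np

def cycle_consistency_loss(x_s, x_t, g_st, g_ts):
    """L1 cycle loss: mapping an image to the other domain and back
    should reconstruct the original (the CycleGAN-style term)."""
    loss_s = np.abs(g_ts(g_st(x_s)) - x_s).mean()  # source -> target -> source
    loss_t = np.abs(g_st(g_ts(x_t)) - x_t).mean()  # target -> source -> target
    return loss_s + loss_t

# Toy "generators": scaling by 2 and by 0.5 are exact inverses,
# so the reconstruction error, and hence the cycle loss, is zero.
g_st = lambda x: 2.0 * x
g_ts = lambda x: 0.5 * x

x_s = np.random.default_rng(0).random((4, 32, 32))  # fake source batch
x_t = np.random.default_rng(1).random((4, 32, 32))  # fake target batch
print(cycle_consistency_loss(x_s, x_t, g_st, g_ts))  # → 0.0
```

In practice the generators are imperfect, so this term is added to the adversarial losses and minimized jointly rather than being exactly zero.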
“…Thus, object detectors adapted at the feature-level are at risk of the source-biased discriminativity and it can lead to false recognition on the target domain. On the other hand, pixel-level adaptation methods [36,1,17] focus on visual appearance translation toward the opposite domain. The model can then take advantage of the information from the translated source images [17,1] or infer pseudo labels of the translated target images [22].…”
Section: Introduction
confidence: 99%
“…The model can then take advantage of the information from the translated source images [17,1] or infer pseudo label of the translated target images [22]. Most existing pixel-level adaptation methods [36,1,17] are based on the assumption that the image translator can perfectly convert one domain to the opposite domain such that the translated images can be regarded as those from the opposite domain. However, these methods reveal imperfect translation in many adaptation cases since the performance of the translator heavily depends on the appearance gap between the source and the target domain, as shown in Fig.…”
Section: Introduction
confidence: 99%
“…There are two different philosophies in which domain adaptation is typically attacked: (i) Domain Transformation: to build a transformation from target data to the source domain and reuse the source feature extractor and classifier (x_t → x_s). Consider the GAN-based methods [1,21]. These work at the input level and transform samples from the target domain to mimic distributions of source domains.…”
Section: Introduction
confidence: 99%
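The (x_t → x_s) philosophy described in this excerpt can be sketched as follows. Both `t2s_generator` and `source_classifier` are hypothetical stand-ins (an identity map and a fixed linear scorer) used purely to show the data flow: target images are translated into the source domain and the frozen, source-trained classifier is reused unchanged.

```python
import numpy as np

def t2s_generator(x_t):
    """Hypothetical stand-in for a trained target-to-source GAN generator.
    A real generator would restyle target images to look like source
    images; the identity map here is purely for illustration."""
    return x_t

def source_classifier(x):
    """Stand-in for a classifier trained only on source-domain images:
    a fixed linear scorer over flattened pixels, thresholded at zero."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(x[0].size)
    scores = x.reshape(len(x), -1) @ w
    return (scores > 0).astype(int)

# Domain transformation: map unlabeled target images into the source
# domain, then reuse the frozen source classifier without retraining.
x_t = np.random.default_rng(1).random((8, 16, 16))  # unlabeled target batch
preds = source_classifier(t2s_generator(x_t))       # one label per image
```

The appeal of this direction is that nothing on the source side is retrained; its weakness, as the excerpt above notes, is that the whole pipeline hinges on how faithfully the translator bridges the appearance gap.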