2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00067

Multi-Component Image Translation for Deep Domain Generalization

Abstract: Domain adaptation (DA) and domain generalization (DG) are two closely related approaches, both concerned with the task of assigning labels to an unlabeled data set. The only difference between them is that DA can access the target data during the training phase, whereas in DG the target data is entirely unseen during training. DG is challenging because we have no prior knowledge of the target samples. If DA methods are applied directly to DG by a simple exclusion of the targe…

Cited by 50 publications (29 citation statements)
References 32 publications
“…For domain generalization, the training data always contains more than one source domain. Most of the existing domain generalization methods [39,20,22,38] split the source data as 70% -30%…”
Section: Correlation-aware Adversarial Domain Generalization (CAADG)
confidence: 99%
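The 70%–30% split mentioned above is typically applied within each source domain, so that every domain contributes to both the training and validation subsets while the target domain stays entirely held out. A minimal sketch (the `split_sources` helper and the domain names are illustrative, not from the cited paper):

```python
import random

def split_sources(domains, train_frac=0.7, seed=0):
    """Split each source domain's samples into train/validation subsets.

    `domains` maps a domain name to a list of samples; the 70%-30%
    split is applied per domain so every source domain appears on
    both sides. The target domain is never included here.
    """
    rng = random.Random(seed)
    train, val = {}, {}
    for name, samples in domains.items():
        shuffled = samples[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train[name] = shuffled[:cut]
        val[name] = shuffled[cut:]
    return train, val

# Example: three source domains (e.g. photo, sketch, cartoon); the
# unseen target domain (e.g. painting) is deliberately absent.
domains = {d: list(range(10)) for d in ["photo", "sketch", "cartoon"]}
train, val = split_sources(domains)
print(len(train["photo"]), len(val["photo"]))  # 7 3
```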
“…Generally, GANs use noise to synthesise an image and the network is trained in an adversarial manner. Inspired by the success of GANs, various extensions have been proposed for image-to-image translation [13,28], pixel-level transfer from source to target domains [43], and style transfer between domains [42]. Rather than using noise alone as the stimulus, the conditional GAN (cGAN) [17] is proposed to control the mode of generated images, however, cGANs need a pair of images for training which is not available for many tasks.…”
Section: Image Domain Translation by Generative Adversarial Networks (GANs)
confidence: 99%
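The conditioning idea the quote describes can be sketched concretely: instead of feeding the generator noise alone, a cGAN appends a class code (commonly a one-hot vector) to the noise so the class of the generated image can be controlled. A minimal NumPy illustration of building such a conditional generator input (the function name and dimensions are assumptions for illustration):

```python
import numpy as np

def conditional_input(noise, label, num_classes):
    """Build a cGAN-style generator input by concatenating the noise
    vector with a one-hot class code, so the generator can be steered
    toward a chosen mode (illustrative sketch only)."""
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([noise, one_hot])

# 100-d noise plus a 10-class one-hot code -> 110-d generator input.
z = np.random.default_rng(0).standard_normal(100)
g_in = conditional_input(z, label=3, num_classes=10)
print(g_in.shape)  # (110,)
```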
“…Because target domain data are totally "invisible", there is no way to estimate the target distribution or to minimize the domain shift when developing the model. Therefore, most domain generalization methods [3,7,13,21,25,27,32] rely on learning a domain-invariant representation from multiple source domains so as to adapt or to generalize to the unseen target domain. Some methods [14,27] rely on image translation techniques by transferring images from one domain to the other and then use the transferred images as augmented training data to improve the model generalization.…”
Section: Introduction
confidence: 99%
“…Therefore, most domain generalization methods [3,7,13,21,25,27,32] rely on learning a domain-invariant representation from multiple source domains so as to adapt or to generalize to the unseen target domain. Some methods [14,27] rely on image translation techniques by transferring images from one domain to the other and then use the transferred images as augmented training data to improve the model generalization. Other methods [13,21,22,27] focus on learning a generalized representation by minimizing discrepancy among source domains.…”
Section: Introduction
confidence: 99%
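The augmentation strategy described in the quote — transferring images across source domains and training on the transferred copies alongside the originals — reduces to a simple data-pipeline pattern. A hedged sketch, where `translate` stands in for a trained image-to-image translation model (a hypothetical placeholder, not an API from the cited methods):

```python
def augment_with_translations(source_images, translate):
    """Augment source data with domain-translated copies.

    `translate` is a placeholder for a trained image-to-image
    translation function (e.g. a GAN generator). Each original is
    kept and paired with its translated counterpart, mirroring the
    augmented-training-data strategy described in the text.
    """
    augmented = []
    for img in source_images:
        augmented.append(img)
        augmented.append(translate(img))
    return augmented

# Toy usage with string stand-ins for images:
images = ["img_a", "img_b"]
aug = augment_with_translations(images, translate=lambda x: x + "_translated")
print(len(aug))  # 4
```

In practice one such pass is run per source-to-source domain pair, so the effective training set grows with the number of available source domains.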