Proceedings of the 26th ACM International Conference on Multimedia 2018
DOI: 10.1145/3240508.3240716

Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation

Abstract: State-of-the-art techniques in Generative Adversarial Networks (GANs) have shown remarkable success in image-to-image translation from peer domain X to domain Y using paired image data. However, obtaining abundant paired data is a non-trivial and expensive process in the majority of applications. When there is a need to translate images across n domains, if the training is performed between every two domains, the complexity of the training will increase quadratically. Moreover, training with data from two doma…
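To make the quadratic-complexity claim in the abstract concrete, the sketch below (illustrative only, not taken from the paper) counts the translators needed when every ordered pair of n domains gets its own model, versus a single multi-domain model that is shared across all domains.

```python
# Illustrative only: model counts for pairwise vs. multi-domain translation.
def pairwise_translators(n_domains: int) -> int:
    """One dedicated translator per ordered pair of domains: n * (n - 1)."""
    return n_domains * (n_domains - 1)

def multi_domain_translators(n_domains: int) -> int:
    """A single shared model covers all domains, as multi-domain GANs aim to do."""
    return 1

for n in (2, 5, 10):
    print(n, pairwise_translators(n), multi_domain_translators(n))
# 2 -> 2 pairwise models, 5 -> 20, 10 -> 90: pairwise training grows as O(n^2).
```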

Cited by 35 publications (17 citation statements). References 13 publications (30 reference statements).
“…CD-GAN [33], RG-UNIT [34] and MUNIT [5] are proposed for multi-modal unsupervised image-to-image translation. They all use a latent-code-sharing assumption.…”
Section: Multi-domain Image-to-image Synthesis
confidence: 99%
“…They all use a latent-code-sharing assumption. CD-GAN [33] learns high-level features across different domains and can generate diverse and realistic results. RG-UNIT [34] takes the power of a retrieval system to complete translation.…”
Section: Multi-domain Image-to-image Synthesis
confidence: 99%
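The latent-code-sharing assumption mentioned in the citations above is the idea that each domain has its own encoder and decoder, but all encoders map into one common latent space, so a translation X -> Y is simply decode_Y(encode_X(x)). The following is a minimal PyTorch sketch of that idea, not the actual CD-GAN architecture; layer sizes and the dummy input are assumptions for illustration.

```python
# Minimal sketch of a shared latent space (latent-code-sharing assumption).
import torch
import torch.nn as nn

def make_encoder(in_ch=3, latent_ch=64):
    # Domain-specific encoder mapping images into the shared latent space.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, latent_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
    )

def make_decoder(out_ch=3, latent_ch=64):
    # Domain-specific decoder mapping shared latent codes back to images.
    return nn.Sequential(
        nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
    )

enc_x, enc_y = make_encoder(), make_encoder()   # one encoder per domain
dec_x, dec_y = make_decoder(), make_decoder()   # one decoder per domain

x = torch.randn(1, 3, 64, 64)                   # dummy image from domain X
z = enc_x(x)                                    # shared latent code
x_to_y = dec_y(z)                               # translate X -> Y via the shared code
```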
“…A different line of work is multi-domain image-to-image translation [5,2,52]: here, the same model can be used for translating images according to multiple attributes (i.e., hair color, gender or age). Other methods, instead, focus on diverse image-to-image translation, in which an image can be translated in multiple ways by encoding different style properties of the target distribution [57,15,24].…”
Section: Related Work
confidence: 99%
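The multi-attribute setting described above (hair color, gender, age) is commonly realized by conditioning a single generator on a target-attribute vector, StarGAN-style. The snippet below is a hypothetical illustration of that input construction, not code from any of the cited methods; the attribute count and shapes are assumed.

```python
# Illustrative only: feed a target-attribute vector to one shared generator by
# tiling it spatially and concatenating it with the image channels.
import torch

def concat_attributes(image, attrs):
    """image: (B, C, H, W); attrs: (B, A) one-hot / binary attribute vector."""
    b, _, h, w = image.shape
    attr_maps = attrs.view(b, -1, 1, 1).expand(b, attrs.size(1), h, w)
    return torch.cat([image, attr_maps], dim=1)   # (B, C + A, H, W)

img = torch.randn(2, 3, 128, 128)
target = torch.tensor([[1., 0., 0., 1., 0.],
                       [0., 1., 0., 0., 1.]])     # e.g. 5 target attributes
gen_input = concat_attributes(img, target)        # fed to the single generator
```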
“…DualGAN [24] suggested a similar method to CycleGAN but used a Wasserstein loss [25] instead of the least-square loss. UNIT [26] and CD-GAN [27] used a shared latent space and cycle consistency for better quality and accuracy.…”
Section: GAN-based Image-to-image Translation
confidence: 99%
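Cycle consistency, mentioned in the citation above, penalizes the round trip X -> Y -> X (and Y -> X -> Y) so that translations preserve content. Below is a minimal sketch of that reconstruction term with an L1 distance; the translators G and F are placeholders, not any paper's networks.

```python
# Minimal sketch of a cycle-consistency loss for two translators G: X->Y, F: Y->X.
import torch

def cycle_consistency_loss(x, y, G, F):
    loss_x = torch.mean(torch.abs(F(G(x)) - x))   # X -> Y -> X reconstruction
    loss_y = torch.mean(torch.abs(G(F(y)) - y))   # Y -> X -> Y reconstruction
    return loss_x + loss_y

# Usage with identity maps standing in for real networks:
x = torch.randn(4, 3, 64, 64)
y = torch.randn(4, 3, 64, 64)
print(cycle_consistency_loss(x, y, G=lambda t: t, F=lambda t: t))  # tensor(0.)
```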