2019
DOI: 10.48550/arxiv.1901.04530
Preprint

CrossNet: Latent Cross-Consistency for Unpaired Image Translation

Abstract: Here, a beautiful smile. It makes a difference…

Figure 1: Given two unpaired sets of images, we train a model to perform translation between the two sets. Here we show, from left to right, our results on changing a specular material to diffuse, enhancing a mobile-phone image to look like one taken by a digital SLR camera, and foreground extraction.


Year Published: 2022

Cited by 1 publication (1 citation statement)
References: 18 publications
“…Latent Space Constraint. Recent works have also explored the latent space by 1) introducing feature cycle-consistency, i.e., minimizing the distance between the latent features of the real images and the generated images, such as the f-constancy term introduced in DTN [26], the latent reconstruction loss in MUNIT [13], and the latent cycle-consistency in XNet [24]; 2) assuming a shared latent (content) space so that the latent (content) features of corresponding paired images are the same, e.g., UNIT [21] and MUNIT [13]. In contrast to the above prior works, which introduce the latent space constraint implicitly at the image level, our key idea is to explicitly learn the translation model over the feature space, whose dimension is much lower than that of the image space, leading to easier model training and better translations.…”
Section: Related Work
confidence: 99%
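The feature cycle-consistency constraint described in the citation statement — penalizing the distance between the latent code of a real image and that of its translation — can be sketched as a small loss term. The names `encode` and `generate` below are hypothetical stand-ins for a trained encoder E and translator G, not the actual networks from any of the cited papers:

```python
import numpy as np

def feature_cycle_consistency_loss(encode, generate, x):
    """Mean L1 distance between E(x) and E(G(x)).

    Mirrors the f-constancy idea: after translating an image, its
    latent features should stay close to those of the original.
    `encode` and `generate` are placeholder callables here.
    """
    z_real = encode(x)           # latent code of the real image
    z_fake = encode(generate(x)) # latent code of the translated image
    return float(np.mean(np.abs(z_real - z_fake)))
```

In practice this term is added, with a weight, to the adversarial objective; here plain NumPy arrays stand in for network activations.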