2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01389
Rethinking the Truly Unsupervised Image-to-Image Translation

Cited by 81 publications (29 citation statements). References 15 publications.
“…MX-Font [23] employs multiple encoders per reference image and disentangles content from style, which makes cross-lingual transfer possible. DG-Font [31] is an unsupervised framework built on TUNIT [2] that replaces standard convolutional blocks with deformable blocks, enabling the model to perform better on cursive characters, which are harder to generate.…”
Section: Few-shot Font Generation
confidence: 99%
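The deformable blocks credited to DG-Font above replace a convolution's fixed sampling grid with learned per-position offsets, so each kernel tap can sample the input at a fractional, content-dependent location. A minimal single-channel NumPy sketch of that sampling scheme (names, shapes, and the toy loops are illustrative, not DG-Font's actual implementation):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample a 2-D array at fractional coords (y, x), zero-padded."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                val += img[yy, xx] * (1 - abs(y - yy)) * (1 - abs(x - xx))
    return val

def deformable_conv2d(img, kernel, offsets):
    """Toy single-channel deformable convolution (cross-correlation form).

    offsets has shape (H, W, k, k, 2): a learned (dy, dx) shift for every
    kernel tap at every output location. With all-zero offsets this reduces
    to an ordinary zero-padded convolution over the regular grid.
    """
    h, w = img.shape
    k = kernel.shape[0]
    r = k // 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    dy, dx = offsets[i, j, a, b]
                    # sample at the regular tap position plus its learned shift
                    acc += kernel[a, b] * bilinear_sample(
                        img, i + a - r + dy, j + b - r + dx)
            out[i, j] = acc
    return out
```

In a real layer the offsets are themselves produced by a small convolution over the input, which is what lets the sampling grid deform to follow cursive strokes.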
“…Image-to-Image (I2I) translation. Image-to-image translation aims to render images from a source domain in the style of a target domain [32,20,36,1,38]. Methods fall into two groups: the paired (supervised) setting [19,35,29] and the unpaired (unsupervised) setting. In the paired setting the training set is supervised: each image from the source domain has a corresponding ground-truth image in the target domain.…”
Section: Related Work
confidence: 99%
“…CUT proposes patch-wise contrastive learning: it crops the input and output images into patches and maximizes the mutual information between corresponding patches. Following CUT, TUNIT [Baek et al. 2021] applies contrastive learning to images with similar semantic structures. However, the semantic-similarity assumption does not hold for arbitrary style transfer, which causes a significant drop in the quality of the learned style representations.…”
Section: Related Work
confidence: 99%
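The patch-wise contrastive objective described in the excerpt above (maximizing mutual information between corresponding input and output patches) is in practice an InfoNCE loss over patch features: the output patch must identify its matching input patch against other patches. A minimal NumPy sketch, with illustrative names and a temperature value that is an assumption, not taken from the cited papers:

```python
import numpy as np

def patch_infonce(query, positive, negatives, tau=0.07):
    """Patch-wise InfoNCE loss over L2-normalized patch features.

    query:     (d,)   feature of one output-image patch
    positive:  (d,)   feature of the input patch at the same location
    negatives: (n, d) features of other input patches
    The mutual-information bound is cast as an (n+1)-way classification:
    the matching patch must win against the negatives.
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    q, p, negs = norm(query), norm(positive), norm(negatives)
    logits = np.concatenate([[q @ p], negs @ q]) / tau
    logits -= logits.max()  # numerical stability before exponentiation
    # cross-entropy with the positive in slot 0
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The loss is near zero when the query aligns with its positive and large when it aligns with a negative instead, which is exactly the pressure that ties each output patch to the content of its input location.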