2020
DOI: 10.1007/978-3-030-58545-7_19

Contrastive Learning for Unpaired Image-to-Image Translation

Cited by 842 publications
(1,005 citation statements)
References 65 publications
“…Generative Adversarial Networks (GANs) [41] have been successfully deployed for generating realistic images; in particular, Pix2Pix [42], CycleGAN [43], and CUT [44] have been shown to produce promising results in translating images from one domain to another. Zhang et al. introduced a Pix2Pix-based approach that focused on achieving high face recognition accuracy for their generated visible images by incorporating an explicit closed-set face recognition loss [45].…”
Section: Results (mentioning; confidence: 99%)
“…Based on that assumption, they propose geometry consistency to achieve one-sided image translation. CUT [16] introduces a patch-level contrastive strategy to preserve image content in place of cycle consistency. However, these methods still treat stylization as a global task, as they focus on migrating styles or attributes onto entire images.…”
Section: Image Translation Without Cycle Consistency (mentioning; confidence: 99%)
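The patch-level contrastive strategy this excerpt refers to is, at its core, an InfoNCE loss: a feature from a patch of the translated image should match the feature from the same patch location in the input (the positive) rather than features from other input patches (the negatives). The sketch below illustrates that idea only; the function name and shapes are hypothetical, and it is not CUT's actual multi-layer implementation.

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE loss for one query patch (illustrative sketch).

    query, positive: 1-D feature vectors from the same spatial location
    in the output and input images; negatives: 2-D array of features
    from other input patches; tau: temperature.
    """
    # L2-normalize so dot products are cosine similarities.
    q = query / np.linalg.norm(query)
    pos = positive / np.linalg.norm(positive)
    negs = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    # Positive similarity first, then all negative similarities.
    logits = np.concatenate(([q @ pos], negs @ q)) / tau
    logits = logits - logits.max()  # numerical stability
    # Cross-entropy with the positive at index 0.
    return float(-logits[0] + np.log(np.exp(logits).sum()))
```

In CUT this loss is summed over many sampled patch locations and several encoder layers; the one-query version above only shows the classification structure of the objective.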
“…For generation we use two metrics that capture slightly different aspects: the Fréchet Inception Distance (FID) and the Structural Similarity Index Measure (SSIM). These have been applied in works of similar context [3,7,11,12,16,18,24-26]. For semantic segmentation purposes we opt for the Dice score.…”
Section: Evaluation Metrics (mentioning; confidence: 99%)
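Of the metrics named in this excerpt, SSIM and the Dice score have simple closed forms; FID additionally requires features from a pretrained Inception network, so it is not sketched here. Below is a minimal illustration with hypothetical helper names; note that `ssim_global` evaluates the SSIM formula over the whole image as a single window, whereas standard SSIM averages it over sliding local windows.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def ssim_global(x, y, data_range=1.0):
    """SSIM formula over a single global window (illustrative only)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))
```

Both metrics are bounded: Dice lies in [0, 1] with 1 for identical masks, and SSIM reaches 1 only when the two images agree in mean, variance, and covariance.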
“…This framework was used for the synthesis of abnormal brain MRI images from annotations with a tumor label [14], and for the generation of retinal images from vessel masks [15]. A more recent work based on contrastive learning [16] has tried to both improve and generalize the image synthesis approach by discarding the bijective assumption implicit in works using cycle consistency, while still avoiding a loss function tailored to the setup.…”
Section: Introduction (mentioning; confidence: 99%)