2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01611
DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network

Cited by 66 publications (28 citation statements) | References 19 publications
“…Due to the constant evolution of GANs during the last few years, these reviews become outdated almost instantaneously. As a result, some relevant and recent works such as [68,176,91] cannot be found in any recent GAN review [3,30]. We consider that a new and more complete review must be done, covering the research that previous reviews did not include and contributing to a deeper and more thorough analysis of the state of the art of GANs.…”
Section: Related Work (mentioning, confidence: 99%)
“…Take H → N H as an example: given a query image q generated from a latent code, we extract feature representations for the generated images, i.e., f = E_L(G_enc(z, c)). We wish the same factor values c to produce similar image features f, even when matched with various z, and vice versa [27,34]. Here, we denote the corresponding similar feature as the "positive" f^+ = E_L(G_enc(z^+, c^+)) and dissimilar features as "negatives"…”
Section: Adversarial Contrastive Loss (mentioning, confidence: 99%)
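The setup quoted above amounts to an InfoNCE-style objective over generator features: the query feature is pulled toward a positive sharing the same factor values c (but a different latent z) and pushed away from negatives with different factors. Below is a minimal sketch of such a loss in PyTorch; the function name, temperature, and batch construction are illustrative assumptions, not the cited paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_feature_loss(f_q, f_pos, f_negs, temperature=0.07):
    """InfoNCE-style loss on generator features.

    f_q:    (D,)   query feature,    f   = E_L(G_enc(z, c))
    f_pos:  (D,)   positive feature, f^+ = E_L(G_enc(z^+, c^+)) with matching factor c
    f_negs: (K, D) negative features from samples with different factor values
    """
    # cosine similarities via L2-normalized features
    f_q = F.normalize(f_q, dim=-1)
    f_pos = F.normalize(f_pos, dim=-1)
    f_negs = F.normalize(f_negs, dim=-1)

    l_pos = (f_q * f_pos).sum(-1, keepdim=True)   # (1,)  query-positive similarity
    l_neg = f_negs @ f_q                          # (K,)  query-negative similarities
    logits = torch.cat([l_pos, l_neg]) / temperature

    # cross-entropy with the positive at index 0 pulls q toward f_pos
    # and pushes it away from every negative
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
```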
“…Recent years have witnessed the rapid development of image generation, supported by a number of generative adversarial network (GAN) [9] based methods [1,11,27,29,31,34,35,41,44]. Compared with previous approaches [22,42], GAN-based methods better model domain-specific data distributions through the adversarial training paradigm, i.e., a discriminator is trained to distinguish real images from generated ones, and the generator is optimized against it.…”
Section: Related Work (mentioning, confidence: 99%)
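For context, the adversarial paradigm described in this quote reduces to an alternating two-player update. The sketch below shows one such step with the non-saturating logistic loss; G, D, the optimizers, and the latent dimension are placeholder assumptions, not components of any of the cited methods.

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, opt_g, opt_d, real_images, latent_dim=128):
    """One adversarial update: D learns to separate real from generated
    images, then G is updated so its samples look real to D."""
    batch = real_images.size(0)
    device = real_images.device
    z = torch.randn(batch, latent_dim, device=device)
    ones = torch.ones(batch, 1, device=device)
    zeros = torch.zeros(batch, 1, device=device)

    # --- discriminator step: push D(real) -> 1, D(fake) -> 0 ---
    fake_images = G(z).detach()  # detach so this step does not update G
    d_loss = (F.binary_cross_entropy_with_logits(D(real_images), ones)
              + F.binary_cross_entropy_with_logits(D(fake_images), zeros))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- generator step: make D label generated samples as real ---
    g_loss = F.binary_cross_entropy_with_logits(D(G(z)), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```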