2020
DOI: 10.1007/978-3-030-58536-5_2
Transforming and Projecting Images into Class-Conditional Generative Networks

Cited by 76 publications (53 citation statements)
References 34 publications
“…The optimization is solved with Adam [17] in [20,39]. We truncate z to [−2, 2] as a typical practice when using BigGAN [4,16].…”
Section: Text-to-image Generation With CLIP+GAN
confidence: 99%
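The truncation practice mentioned in the quote above can be sketched as follows. This is a minimal illustration, not the cited papers' code: it assumes a standard-normal latent prior (as in BigGAN) and implements truncation by resampling out-of-range entries (BigGAN's "truncation trick"); simple clipping would be an alternative reading of "truncate".

```python
import numpy as np

def sample_truncated_z(dim, low=-2.0, high=2.0, rng=None):
    """Sample z ~ N(0, I), resampling entries until all lie in [low, high]."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(dim)
    mask = (z < low) | (z > high)
    while mask.any():
        # Redraw only the out-of-range coordinates.
        z[mask] = rng.standard_normal(int(mask.sum()))
        mask = (z < low) | (z > high)
    return z

z = sample_truncated_z(128, rng=np.random.default_rng(0))
```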
“…In this work, we adopt the widely used Adam [17] optimizer. Some recent works recommend gradient-free optimizers, such as BasinCMA [3,16,32], for optimizing in GAN spaces due to the high non-convexity. However, our study shows that BasinCMA tends to incur a higher computation cost than Adam, because BasinCMA requires a …”
Section: Improving Optimization
confidence: 99%
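The gradient-based latent optimization that the quote contrasts with BasinCMA can be sketched on a toy problem. This is an illustrative sketch only: the "generator" here is a linear map rather than a real GAN, the Adam update is written out by hand, and all names are assumptions, not the cited papers' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((16, 8))          # toy linear "generator": image = G @ z
target = G @ rng.standard_normal(8)       # target "image" with a known preimage
init_loss = float(np.sum(target ** 2))    # loss at the starting point z = 0

z = np.zeros(8)
m, v = np.zeros(8), np.zeros(8)           # Adam first/second moment estimates
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 1001):
    grad = 2 * G.T @ (G @ z - target)     # gradient of ||G z - target||^2
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    z -= lr * m_hat / (np.sqrt(v_hat) + eps)

loss = float(np.sum((G @ z - target) ** 2))
```

On this convex toy objective the reconstruction loss drops far below its initial value; the non-convexity mentioned in the quote is what motivates gradient-free alternatives like BasinCMA on real GAN latent spaces.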
“…Recovered latent vectors can be used as a means of evaluating GAN performance [2], as well as a way to discover which features a GAN has learned from its training dataset. Moreover, recovered latent vectors make it possible to modify images in a desired direction [3,4], for example applying styles to human faces. In addition, linear operations on latent vectors result in meaningful changes in the generated images [5].…”
Section: Introduction
confidence: 99%
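The linear operations on latent vectors mentioned in the last quote can be illustrated with a small sketch. All names are hypothetical and no real GAN is involved: decoding the interpolated code through a generator is what would yield the meaningful image changes the quote describes.

```python
import numpy as np

def lerp(z0, z1, alpha):
    """Linear interpolation between two latent codes (alpha in [0, 1])."""
    return (1 - alpha) * z0 + alpha * z1

rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(4), rng.standard_normal(4)
z_mid = lerp(z_a, z_b, 0.5)   # midpoint code; a generator would decode it to a blended image
```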