2017
DOI: 10.48550/arxiv.1705.07904
Preprint

Semantically Decomposing the Latent Spaces of Generative Adversarial Networks

Abstract: We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample …

Cited by 19 publications (24 citation statements)
References 17 publications
“…In this way, face images of new subjects can be generated by fixing the identity component z_id and varying the non-identity component z_nid. The method most closely related to ours is the semantically decomposed GAN (SD-GAN) proposed in [46]. However, in contrast with SD-GAN, our method also supports the generation of face images of subjects that exist in the training set.…”
Section: Introduction
confidence: 90%
“…However, this approach does not allow generation of samples of new subjects. As far as we are aware, the SD-GAN method proposed in [46] is the only work that has attempted to solve this task. SD-GANs split the vector of latent variables z into two components z_I and z_O encoding identity-related attributes and non-identity-related attributes respectively.…”
Section: Related Work
confidence: 99%
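The latent-code split described in the excerpt above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the split sizes `DIM_I`/`DIM_O` are arbitrary placeholders, and the generator that would consume these codes is left out.

```python
import numpy as np

DIM_I, DIM_O = 50, 50  # illustrative split sizes, not taken from the paper

rng = np.random.default_rng(0)

def sample_identity():
    """Sample the identity portion z_I of the latent code."""
    return rng.standard_normal(DIM_I)

def sample_observation():
    """Sample the observation portion z_O of the latent code."""
    return rng.standard_normal(DIM_O)

# Pairwise sampling: two full latent codes that share z_I but differ in
# z_O, so a generator would render the same subject under two different
# contingent conditions (lighting, pose, ...).
z_I = sample_identity()
z_a = np.concatenate([z_I, sample_observation()])
z_b = np.concatenate([z_I, sample_observation()])

# The identity halves match; the observation halves differ.
assert z_a.shape == (DIM_I + DIM_O,)
assert np.allclose(z_a[:DIM_I], z_b[:DIM_I])
assert not np.allclose(z_a[DIM_I:], z_b[DIM_I:])
```

In SD-GAN's pairwise training scheme, such same-identity pairs are what the discriminator judges jointly, which is what pushes identity information into z_I.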
“…Compared to PATE-GAN and other work which involves conditional GANs such as Odena et al [14] and Trigueros et al [20], our work has the added constraint that for our synthetic dataset we would like to create not just synthetic samples but also synthetic classes (our identities). This issue has been explored by Donahue et al [4] who propose Semantically Decomposed GANs (SD-GANs) which encourage the disentanglement of the latent space of GANs. Their GAN is trained using a latent code z decomposed into z_1 and z_2.…”
Section: Related Work
confidence: 99%
“…The model we consider is similar to the work of Donahue et al [4] in their SD-GANs, but applied to StyleGAN2. We begin with a generator G that takes two input latent codes, z_1 ∈ R^512 and z_2 ∈ R^512, both drawn from a standard normal distribution.…”
Section: Synthetic Data Set Generation
confidence: 99%
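The batch construction implied by the two excerpts above, synthetic classes via a repeated identity code z_1 and per-view observation codes z_2, can be sketched like this. The batch layout (4 identities × 3 views) is an illustrative assumption, and the generator itself (e.g. a two-input StyleGAN2 variant) is not implemented here; only the latent-code shapes from the quoted passage (512 each, standard normal) are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT = 512  # per-code dimensionality stated in the quoted passage

n_identities, n_views = 4, 3  # illustrative batch layout

# One z_1 per synthetic identity, repeated across its views...
z1 = np.repeat(rng.standard_normal((n_identities, LATENT)), n_views, axis=0)
# ...and an independent z_2 for every rendered view.
z2 = rng.standard_normal((n_identities * n_views, LATENT))

# A generator G(z1, z2) would map each row pair to one image, giving a
# synthetic dataset of 4 new subjects with 3 views each.
assert z1.shape == (12, 512) and z2.shape == (12, 512)
assert np.allclose(z1[0], z1[1])      # first two rows: same identity
assert not np.allclose(z1[2], z1[3])  # rows 2 and 3 straddle an identity boundary
```

Sampling fresh rows of z_1 is what yields entirely new synthetic identities, the "synthetic classes" constraint raised in the earlier excerpt.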
“…In other words, the editing should change the target attribute but keep other information ideally unchanged. To achieve this effect, methods are mainly focused on three aspects: the comprehensive design of loss functions [6,29,36], the involvement of additional attribute features [35,1,22,43] and the architecture designs [33,10,24,45,7]. However, these works either discard the latent code, resulting in the inability to continuously interpolate certain semantics, or fail to provide the same synthesis quality compared with state-of-the-art GANs [17,18].…”
Section: Exploring Latent Space Representations in GANs
confidence: 99%