2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00229

TediGAN: Text-Guided Diverse Face Image Generation and Manipulation

Cited by 258 publications (227 citation statements)
References 24 publications
“…1 with a Langevin-like procedure where we take a gradient step with respect to the classifier probability and then correct this gradient step with the diffusion model. Unlike many GAN-based methods [12,69,93,43,94], D2C does not need to optimize an inversion procedure at evaluation time, and thus the latent value is much faster to compute; D2C is also better at retaining fine-grained features of the original image due to the reconstruction loss.…”
Section: Conditions From Labeled Examples
confidence: 99%
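The excerpt above describes guided sampling: a gradient step on the classifier probability, corrected by the diffusion model's score. A minimal sketch of that Langevin-like update, using hypothetical closed-form stand-ins (a Gaussian classifier centered at 2 and a standard-Gaussian prior score) rather than learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier_log_prob_grad(x):
    # Hypothetical stand-in for grad log p(y|x): a Gaussian classifier
    # whose class mode sits at +2, pulling samples toward it.
    return -(x - 2.0)

def diffusion_score(x):
    # Hypothetical stand-in for the learned score grad log p(x):
    # here the unconditional prior is a standard Gaussian at 0.
    return -x

def guided_langevin_step(x, step=0.1, guidance=1.0):
    """One Langevin-like update: gradient step on the classifier
    probability, corrected by the (unconditional) diffusion score."""
    noise = rng.normal(size=x.shape)
    grad = guidance * classifier_log_prob_grad(x) + diffusion_score(x)
    return x + step * grad + np.sqrt(2 * step) * noise

x = rng.normal(size=(200,))
for _ in range(500):
    x = guided_langevin_step(x)
print(x.mean())  # samples settle between the prior mode (0) and class mode (2)
```

With both gradients weighted equally, the stationary distribution is centered at the compromise point between prior and classifier, illustrating how guidance trades off the two terms.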
“…To perform conditional generation over an unconditional LVGM, most methods assume access to a discriminative model (e.g., a classifier); the latent space of the generator is then modified to change the outputs of the discriminative model. The discriminative model can operate on either the image space [63,67,25] or the latent space [77,94]. For image-space discriminative models, plug-and-play generative networks [63] control the attributes of generated images via Langevin dynamics [75]; these ideas are also explored in diffusion models [83].…”
Section: Conditional Generation With Unconditional Models
confidence: 99%
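The pattern described here — modifying the latent code of a frozen generator so that a frozen discriminative model's output changes — can be sketched with hypothetical linear stand-ins for both networks:

```python
import numpy as np

# Hypothetical frozen generator and attribute classifier, linear for the sketch.
W_gen = np.array([[1.0, 0.5],
                  [0.0, 1.0]])     # latent (2,) -> "image" (2,)
w_cls = np.array([1.0, -1.0])      # "image" -> attribute logit

def generate(z):
    return W_gen @ z

def attribute_logit(img):
    return float(w_cls @ img)

def edit_latent(z, target_sign=+1.0, lr=0.5, steps=20):
    """Gradient ascent in latent space on the classifier output,
    leaving both networks frozen (conditional generation by latent search)."""
    grad_z = target_sign * (W_gen.T @ w_cls)   # d logit / d z, by the chain rule
    for _ in range(steps):
        z = z + lr * grad_z
    return z

z0 = np.zeros(2)
z1 = edit_latent(z0)
print(attribute_logit(generate(z0)), attribute_logit(generate(z1)))
```

The same loop works with nonlinear networks by replacing the closed-form gradient with autodiff; the key point is that only the latent code is optimized, not the model weights.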
“…Shen et al. [51] perform eigenvalue decomposition on the affine transformation layers of StyleGAN2 generators [28] to learn versatile manipulation directions. Xia et al. [59] and Patashnik and Wu et al. [44] manipulate images using a human-understandable text prompt, providing a more intuitive image-editing interface.…”
Section: GAN-based Image Editing
confidence: 99%
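The first technique mentioned — finding manipulation directions from the affine layers themselves, with no training — amounts to taking the top eigenvectors of AᵀA for an affine weight A, i.e. the latent directions that perturb the layer's output the most. A minimal sketch with a random matrix standing in for a StyleGAN2 modulation weight:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical affine-layer weight mapping latent codes to per-channel styles,
# standing in for a StyleGAN2 style-modulation matrix.
A = rng.normal(size=(64, 16))

# Closed-form manipulation directions: eigenvectors of A^T A.
# np.linalg.eigh returns eigenvalues in ascending order, so the last
# column is the direction with the largest output change.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
top = eigvecs[:, -1]

# The top direction moves the styles more than a random unit direction.
rand = rng.normal(size=16)
rand /= np.linalg.norm(rand)
print(np.linalg.norm(A @ top), np.linalg.norm(A @ rand))
```

Editing then means adding a scaled direction to a latent code; different eigenvectors tend to control different factors of variation.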
“…Early works [17,18] carry out LBIE with rule-based instructions and predefined semantic labels, but with limited practicality. Inspired by text-to-image generation [19,20,21], subsequent works [22,23,24,25,26,27,30] perform LBIE with conditional GANs. Following multi-turn manipulation from humans, iterative LBIE (ILBIE) [28,29] edits images step by step.…”
Section: Language-based Image Editing
confidence: 99%
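The multi-turn (iterative) editing setup described above can be reduced to a loop where each instruction is applied to the result of the previous turn. A toy sketch, with a hypothetical instruction encoder that maps each text instruction to a fixed latent direction (standing in for a learned conditional model):

```python
import numpy as np

# Hypothetical instruction encoder: each instruction maps to a fixed
# edit direction in latent space (stand-in for a conditional GAN).
EDIT_DIRECTIONS = {
    "add smile":  np.array([1.0, 0.0]),
    "blond hair": np.array([0.0, 1.0]),
}

def apply_instruction(z, instruction, strength=1.0):
    """One editing turn: shift the latent along the instruction's direction."""
    return z + strength * EDIT_DIRECTIONS[instruction]

# Iterative LBIE: instructions applied step by step, each turn
# starting from the previous turn's result, so edits accumulate.
z = np.zeros(2)
for turn in ["add smile", "blond hair", "add smile"]:
    z = apply_instruction(z, turn)
print(z)
```

The point of the iterative formulation is exactly this statefulness: each turn conditions on the accumulated edits rather than on the original image alone.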