Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings 2022
DOI: 10.1145/3528233.3530708

Self-Distilled StyleGAN: Towards Generation from Internet Photos

Cited by 26 publications (11 citation statements)
References 15 publications
“…Bau et al [7] further demonstrated how to use masks provided by the user, to localize the text-based editing and restrict the change to a specific spatial region. However, while GAN-based image editing approaches succeed on highly-curated datasets [27], e.g., human faces, they struggle over large and diverse datasets.…”
Section: Fixed Attention Maps and Random Seed
confidence: 99%
“…StyleGAN-XL [147], for example, scales StyleGAN to large datasets with multiple classes of objects. Self-Distilled StyleGAN [148] aims to use internet photo collections for training. These efforts can enable 3D-aware image synthesis for real-world scenarios, but the attendant computational complexity will need to be addressed.…”
Section: Discussion
confidence: 99%
“…Generative model zoo. We evaluate on a collection of 133 generative models trained using different techniques including GANs [34,54,56,58,63,74,79,92,107,122], diffusion models [24,44,116], the MLP-based generative model CIPS [3], and the autoregressive model VQGAN [30]. For evaluation we also manually assign a ground-truth label to each model based on the type of generated images, with 23 labels in total.…”
Section: Methods
confidence: 99%
“…Generative models are open-sourced at an unprecedented rate of hundreds per month. They use different learning objectives [25,36,44,48,60,85,114,117], training techniques [54,55,63,79,100,106], and network architectures [13,30,57,97]. They are also trained on different datasets [20,79,108,129] for different applications [2,39,65,90,96,103,133,139].…”
Section: Related Work
confidence: 99%