2022
DOI: 10.48550/arxiv.2208.08840
Preprint

Mind the Gap in Distilling StyleGANs

Abstract: The StyleGAN family is one of the most popular Generative Adversarial Networks (GANs) for unconditional generation. Despite its impressive performance, its high storage and computation demands impede deployment on resource-constrained devices. This paper provides a comprehensive study of distilling from the popular StyleGAN-like architecture. Our key insight is that the main challenge of StyleGAN distillation lies in the output discrepancy issue, where the teacher and student model yield different outputs…

Cited by 1 publication (5 citation statements). References 28 publications.
“…logits and features, from teacher networks to a student network, which was originally used in classification [10,37,43,55,49,62]. Subsequently, KD has also been employed for GAN-based model compression [9,52,30,32,54,25]. In GAN compression [30], the student network learns every intermediate feature and the final outputs from the teacher network.…”
Section: Knowledge Distillation of GANs
confidence: 99%
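The excerpt above describes GAN compression [30] only at a high level: the student imitates both the teacher's intermediate feature maps and its final images. The sketch below illustrates one way such a feature-plus-output distillation step could look. It is not the authors' code; the generator interface (returning an image and a list of features), the adapter layers, and the L1/MSE pairing are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher_G, student_G, adapters, z):
    """One distillation step: match intermediate features and final outputs.

    teacher_G / student_G are assumed to return (image, list_of_features);
    adapters is a matching list of learnable layers (e.g. 1x1 convolutions)
    that project student feature maps to the teacher's channel widths.
    """
    with torch.no_grad():
        img_t, feats_t = teacher_G(z)      # frozen teacher forward pass
    img_s, feats_s = student_G(z)          # trainable student forward pass

    # Output-level distillation: pixel-wise match of the generated images.
    loss = F.l1_loss(img_s, img_t)

    # Feature-level distillation: match every intermediate feature map.
    for f_s, f_t, adapt in zip(feats_s, feats_t, adapters):
        loss = loss + F.mse_loss(adapt(f_s), f_t)
    return loss
```

In practice the adapters would be optimized jointly with the student, and this distillation loss would be combined with the usual adversarial objective.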
“…CAGAN [32] adopts multi-level and content-aware distillation, followed by fine-tuning with an adversarial loss. StyleKD [54] proposes that the mapping network plays an important role in generation. In addition, a novel initialization strategy and a latent-direction-based distillation loss are presented to keep semantic consistency between the teacher and student models.…”
Section: Knowledge Distillation of GANs
confidence: 99%
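The latent-direction-based distillation loss of StyleKD [54] is only named in the excerpt, not specified. The following is a minimal sketch of one plausible form of such a loss, assuming the generators map a latent code w directly to an image and that the loss compares the image-space change induced by a shared latent perturbation in the teacher and the student. The function names, the perturbation scheme, and the cosine comparison are assumptions for illustration, not the published method.

```python
import torch
import torch.nn.functional as F

def latent_direction_loss(teacher_G, student_G, w, direction, eps=1.0):
    """Match the image-space change induced by a shared latent perturbation."""
    with torch.no_grad():
        d_t = teacher_G(w + eps * direction) - teacher_G(w)  # teacher's edit
    d_s = student_G(w + eps * direction) - student_G(w)      # student's edit

    # Compare normalized change directions so that the semantic effect of the
    # latent edit, rather than its raw magnitude, must agree across models.
    d_t = d_t.flatten(1)
    d_s = d_s.flatten(1)
    return 1.0 - F.cosine_similarity(d_s, d_t, dim=1).mean()
```

A loss of this kind encourages the student to preserve the teacher's latent semantics, which is consistent with the excerpt's emphasis on semantic consistency between teacher and student.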