2022
DOI: 10.48550/arxiv.2205.06032
Preprint

D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image Generation

Abstract: As an important and challenging problem, few-shot image generation aims to generate realistic images by training a GAN model on only a few samples. A typical solution is to transfer a well-trained GAN model from a data-rich source domain to a data-deficient target domain. In this paper, we propose a novel self-supervised transfer scheme termed D3T-GAN, addressing cross-domain GAN transfer in few-shot image generation. Specifically, we design two individual strategies to tran…
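
The abstract is truncated, but the baseline it describes (transferring a GAN pretrained on a data-rich source domain to a few-shot target domain by fine-tuning) can be sketched minimally. Everything below is an illustrative assumption, not the paper's D3T-GAN method: the `Generator`/`Discriminator` modules, the non-saturating loss, and the hyperparameters are placeholders.

```python
# Minimal sketch of the baseline cross-domain GAN transfer that few-shot
# methods such as D3T-GAN build on: take a generator/discriminator pair
# pretrained on a source domain and fine-tune both on a handful of target
# images. The architecture and loss here are generic assumptions.
import torch
import torch.nn.functional as F

def finetune_gan(G, D, target_images, steps=2000, lr=2e-4, z_dim=128):
    """Fine-tune a source-pretrained GAN on a few target-domain images."""
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.0, 0.99))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.0, 0.99))
    n = target_images.size(0)  # e.g. 10 images in the few-shot setting

    for step in range(steps):
        # Discriminator update: real few-shot batch vs. generated fakes.
        z = torch.randn(n, z_dim)
        fake = G(z).detach()
        d_loss = (F.softplus(-D(target_images)).mean()   # push reals up
                  + F.softplus(D(fake)).mean())          # push fakes down
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: non-saturating adversarial loss.
        z = torch.randn(n, z_dim)
        g_loss = F.softplus(-D(G(z))).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G
```

Naive full fine-tuning like this tends to overfit and mode-collapse on so few images, which is exactly the failure mode the transfer strategies surveyed in the citation statement below try to prevent.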

Cited by 1 publication (1 citation statement)
References 33 publications (61 reference statements)
“…These include scale and shift parameters [32], updating only the higher discriminator layers [28], linear combinations of scale and shift parameters [41], modulating kernels or convolutions [59,58,10,2] and singular values [38], mapping networks from noise to latents [46,29,53], and latent offsets [12]. Various works apply regularization losses that constrain samples/weights to stay close to the source generator, including elastic weight regularization [27], domain correspondence [33,16,22], contrastive learning [60], spatial alignment [51], inversion [49,23,44], random masks on discriminators [61], and alignment-free spatial correlation [30]. Given the increasing popularity of VQ-VAE and diffusion-based models, recent works [43] and [61] explore few-shot fine-tuning of VQ-VAE tokens and diffusion models, respectively.…”
Section: Generative Transfer
confidence: 99%
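
As a hedged illustration of one technique family listed in the citation statement, adapting only per-channel scale-and-shift parameters while the pretrained weights stay frozen (in the spirit of [32]), here is a minimal PyTorch sketch. The `ScaleShift` adapter and where it is inserted are assumptions for illustration, not any cited paper's code.

```python
# Sketch: freeze a pretrained generator and train only small per-channel
# scale (gamma) and shift (beta) adapters inserted after each convolution.
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    """Learnable per-channel scale and shift applied after a frozen layer."""
    def __init__(self, num_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):  # x: (N, C, H, W)
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)

def add_scale_shift_adapters(generator):
    """Freeze all pretrained weights, then wrap each conv with an adapter."""
    for p in generator.parameters():
        p.requires_grad = False
    for name, module in generator.named_children():
        if isinstance(module, nn.Conv2d):
            # Replace the conv with conv -> trainable scale/shift.
            setattr(generator, name,
                    nn.Sequential(module, ScaleShift(module.out_channels)))
        else:
            add_scale_shift_adapters(module)  # recurse into submodules
    return generator
```

Only the gamma and beta parameters receive gradients, so the trainable parameter count is tiny. That is the point of this family of methods in the few-shot regime: far less capacity to overfit the handful of target images than full fine-tuning.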