“…These include scale-and-shift parameters [32], updating only the higher discriminator layers [28], linear combinations of scale-and-shift parameters [41], modulating kernels or convolutions [59,58,10,2] or singular values [38], mapping networks from noise to latents [46,29,53], and latent offsets [12]. Other works apply regularization losses that constrain the adapted samples or weights toward the source generator, including elastic weight regularization [27], domain correspondence [33,16,22], contrastive learning [60], spatial alignment [51], inversion [49,23,44], random masks on discriminators [61], and alignment-free spatial correlation [30]. Given the increasing popularity of VQ-VAE- and diffusion-based models, recent works [43] and [61] explore few-shot finetuning of VQ-VAE tokens and diffusion models, respectively.…”
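Two recurring ideas in the approaches above can be sketched concretely: restricting finetuning to per-channel scale-and-shift parameters over frozen source weights, and an elastic-weight penalty that keeps adapted weights close to the source generator. A minimal NumPy sketch; the function names and the diagonal-Fisher assumption are illustrative and not taken from the cited works:

```python
import numpy as np

def scale_shift(w_src, gamma, beta):
    """Adapt frozen source weights w_src by learning only
    scale (gamma) and shift (beta) parameters."""
    return gamma * w_src + beta

def ewc_penalty(theta, theta_src, fisher, lam=1.0):
    """Elastic-weight regularization: penalize deviation of the
    adapted weights theta from the source weights theta_src,
    weighted by a diagonal Fisher-information importance estimate."""
    return lam * np.sum(fisher * (theta - theta_src) ** 2)
```

In the scale-and-shift setting only `gamma` and `beta` receive gradients, so the few available target samples update far fewer parameters; the elastic-weight penalty is instead added to the adaptation loss so that weights deemed important for the source domain move the least.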