Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3548270

Dynamic Weighted Semantic Correspondence for Few-Shot Image Generative Adaptation

Cited by 3 publications (4 citation statements) · References 15 publications
“…We detail the experimental settings in Section 4. DCL (Zhao et al 2022b), RSSA (Xiao et al 2022), and DWSC (Hou et al 2022). We also include the latest finetuning-based methods AdAM (Zhao et al 2022a) and RICK (Zhao et al 2023).…”
Section: Methods
confidence: 99%
“…DCL (Zhao et al 2022b) proposed contrastive losses between source/target features of both the generator and the discriminator. DWSC (Hou et al 2022) designed perceptual and contextual losses for easy- and hard-to-generate patches, respectively. These methods kept the characteristics of the source domain by imposing strong regularization, and thus inherited the diversity of the source dataset.…”
Section: Related Work
confidence: 99%
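For concreteness, below is a minimal PyTorch-style sketch of the easy/hard patch weighting idea described in the excerpt above. The helper signature, the difficulty threshold `tau`, and the exact loss forms are assumptions made for illustration; this is not the published DWSC implementation (Hou et al 2022).

```python
import torch
import torch.nn.functional as F

def dwsc_style_loss(src_feats, tgt_feats, patch_difficulty, tau=0.5):
    """Illustrative sketch (NOT the published DWSC code): apply a
    perceptual-style loss to easy-to-generate patches and a
    contextual-style loss to hard ones, as the excerpt describes.

    src_feats, tgt_feats: (N, C) per-patch feature vectors from the
        source-domain and adapted generators.
    patch_difficulty:     (N,) scores in [0, 1]; values above tau mark
        a patch as hard to generate (tau is an assumption here).
    """
    easy = patch_difficulty <= tau
    hard = ~easy

    loss = src_feats.new_zeros(())
    if easy.any():
        # Perceptual term: match features of corresponding easy patches.
        loss = loss + F.mse_loss(tgt_feats[easy], src_feats[easy])
    if hard.any():
        # Contextual term: match each hard target patch to its most
        # similar source patch by cosine similarity (a common core of
        # contextual losses), rather than a fixed spatial correspondence.
        sim = F.normalize(tgt_feats[hard], dim=1) @ F.normalize(src_feats, dim=1).T
        loss = loss + (1.0 - sim.max(dim=1).values).mean()
    return loss
```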
“…These include scale and shift parameters [32], updating only the higher discriminator layers [28], linear combinations of scale and shift parameters [41], modulating kernels or convolutions [59,58,10,2] and singular values [38], mapping networks from noise to latents [46,29,53], and latent offsets [12]. Various works apply regularization losses that constrain samples/weights with respect to the source generator, including elastic weight regularization [27], domain correspondence [33,16,22], contrastive learning [60], spatial alignment [51], inversion [49,23,44], random masks on discriminators [61], and alignment-free spatial correlation [30]. Given the increasing popularity of VQ-VAE and diffusion-based models, recent works [43] and [61] explore few-shot finetuning of VQ-VAE tokens and diffusion models.…”
Section: Generative Transfer
confidence: 99%
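As one concrete instance of the regularization losses surveyed in the excerpt above, here is a minimal sketch of an elastic-weight-regularization penalty in the spirit of [27]. The dictionary layout, the Fisher-information estimates, and the weight `lam` are assumptions for this sketch, not a specific published implementation.

```python
import torch

def ewc_penalty(model, src_params, fisher, lam=1e3):
    """Illustrative elastic-weight-regularization penalty: parameters
    that were important for the source generator (large Fisher values)
    are discouraged from drifting during few-shot adaptation.

    src_params / fisher: dicts mapping parameter names to tensors saved
        from the source model (names and lam are assumptions here).
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            # Quadratic pull toward the source weights, scaled per-weight
            # by its estimated importance to the source task.
            penalty = penalty + (fisher[name] * (p - src_params[name]) ** 2).sum()
    return lam * penalty
```

In use, this penalty would simply be added to the adversarial loss at each finetuning step, so that few-shot adaptation trades off fitting the target samples against preserving source-domain diversity.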