Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) Conference Proceedings 2023
DOI: 10.1145/3588432.3591506
Key-Locked Rank One Editing for Text-to-Image Personalization

Cited by 30 publications (3 citation statements) · References 4 publications
“…This occurs because the method primarily emphasizes the reconstruction loss while disregarding the compositional aspect of the target concept in relation to others. Similar findings are reported in (Tewel et al. 2023), which suggest that the dominance of the inverted concepts in the generation process encroaches upon the spotlight of other concepts. However, this is simply attributed to an over-fitting problem, with the underlying rationale remaining unexplored.…”
Section: Introduction (supporting)
confidence: 84%
“…However, the remaining methods, although employing similar approaches of searching for inverted embeddings, rely on either retraining or fine-tuning for this purpose. For instance, DreamBooth (Ruiz et al. 2023) retrains the entire Imagen model to construct embeddings for the CoI, while Custom Diffusion (Kumari et al. 2023), Perfusion (Tewel et al. 2023), SVDiff (Han et al. 2023), and Cones (Liu et al. 2023) fine-tune only a subset of the Stable Diffusion model's parameters. To mitigate language drift and overfitting, a large number of images from the same CoI class is typically used as regularization during training or fine-tuning.…”
Section: Inversion For Customization and Personalization (mentioning)
confidence: 99%
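
The pattern described in the excerpt above (fine-tuning only a small subset of weights while regularizing with generic images of the same class) can be illustrated with a minimal, self-contained PyTorch sketch. This is not code from any of the cited papers; the toy module, tensor shapes, loss weighting, and variable names below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Minimal sketch: fine-tune only one projection (standing in for a cross-attention
# value projection) while a prior-preservation term on class images regularizes
# against language drift. All names and shapes are hypothetical placeholders.

class ToyCrossAttention(nn.Module):
    def __init__(self, dim_text=16, dim_img=32):
        super().__init__()
        self.to_k = nn.Linear(dim_text, dim_img, bias=False)  # kept frozen
        self.to_v = nn.Linear(dim_text, dim_img, bias=False)  # fine-tuned

attn = ToyCrossAttention()
for p in attn.parameters():
    p.requires_grad_(False)
attn.to_v.weight.requires_grad_(True)                 # partial fine-tuning only

opt = torch.optim.AdamW([attn.to_v.weight], lr=1e-4)

# Dummy encodings standing in for text/latent features of the concept-of-interest
# (CoI) images and of generic class images used for regularization.
txt_coi, tgt_coi = torch.randn(4, 16), torch.randn(4, 32)
txt_cls, tgt_cls = torch.randn(4, 16), torch.randn(4, 32)

for _ in range(10):
    opt.zero_grad()
    loss_rec = nn.functional.mse_loss(attn.to_v(txt_coi), tgt_coi)    # reconstruction
    loss_prior = nn.functional.mse_loss(attn.to_v(txt_cls), tgt_cls)  # class regularizer
    (loss_rec + 1.0 * loss_prior).backward()
    opt.step()
```

The key design point the excerpt refers to is the second loss term: without it, optimizing only the reconstruction loss tends to let the inverted concept overwrite the model's prior for its class.
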
“…SVDiff [17] and Lightweight DreamBooth [40] further refine this process using Singular Value Decomposition (SVD) and an orthogonal incomplete basis within the LoRA weight space, respectively. Perfusion [45] incorporates Rank-One Model Editing (ROME) [30], exemplifying targeted model edits aligned with conceptual directions. Meanwhile, HyperDreamBooth [40] enables rapid adaptation to new concepts through hypernetwork-initialized rank-1 residuals.…”
Section: T2I Personalization (mentioning)
confidence: 99%
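
For intuition on the rank-one edits mentioned in the excerpt above, the following is a minimal sketch of a ROME-style rank-1 update that maps a chosen key direction to a target value. It omits the key-covariance whitening used in ROME and the key-locking mechanism of Perfusion; the function name, shapes, and tolerances are illustrative assumptions, not the papers' implementations.

```python
import torch

# Simplified rank-one weight edit: add a rank-1 residual so that a chosen key
# direction k is mapped exactly to a target value v_star, while the rest of the
# linear map changes only along the k direction.

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_star: torch.Tensor) -> torch.Tensor:
    """Return W + (v_star - W k) k^T / (k^T k), so that (W_new @ k) == v_star."""
    residual = (v_star - W @ k).unsqueeze(1)   # (d_out, 1)
    k_row = k.unsqueeze(0)                     # (1, d_in)
    return W + residual @ k_row / (k @ k)

W = torch.randn(32, 16)      # e.g. a cross-attention K or V projection
k = torch.randn(16)          # key direction associated with the personalized concept
v_star = torch.randn(32)     # desired output for that key

W_edited = rank_one_edit(W, k, v_star)
assert torch.allclose(W_edited @ k, v_star, atol=1e-4)
```

Because the update is rank one, it can be stored and composed cheaply, which is the property the LoRA- and residual-based variants cited above also exploit.
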