2022
DOI: 10.1016/j.neunet.2021.10.017

GuidedStyle: Attribute knowledge guided style manipulation for semantic face editing

Cited by 52 publications (37 citation statements)
References 12 publications
“…Various works have demonstrated that editing images via non-linear latent paths typically results in more faithful, disentangled edits [4,27]. Following these works, we now explore learning non-linear latent editing paths within the W+ latent space using the StyleCLIP mapper technique [48].…”
Section: Editing Via Non-linear Latent Paths (mentioning)
confidence: 99%
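As a rough illustration of the non-linear latent-path idea in the excerpt above, the sketch below uses a small mapper network that predicts an input-dependent offset for a W+ code, so the edit varies with the starting latent rather than following one global direction. The architecture, dimensions, and scaling factor are assumptions for illustration, not the StyleCLIP mapper's actual configuration.

```python
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    """Predicts an input-dependent offset for a W+ code, so the editing
    path bends with the starting point instead of being a fixed direction."""
    def __init__(self, w_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(w_dim, w_dim), nn.LeakyReLU(0.2),
            nn.Linear(w_dim, w_dim), nn.LeakyReLU(0.2),
            nn.Linear(w_dim, w_dim),
        )

    def forward(self, w_plus):                  # w_plus: (batch, num_layers, w_dim)
        return w_plus + 0.1 * self.net(w_plus)  # non-linear, per-code offset

w_plus = torch.randn(4, 18, 512)                # a batch of W+ latent codes
edited = LatentMapper()(w_plus)                 # edited codes are fed to the generator
```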
“…Most notably, the discovery of such interpretable directions has received much research attention with the advancement of StyleGAN. Recent works have sought to identify a semantically meaningful path in a supervised manner, which requires a large number of annotated images [2] or attribute predictors on the predefined semantics [17,36].…”
Section: Related Work (mentioning)
confidence: 99%
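The supervised route mentioned in this excerpt can be sketched as fitting a linear classifier on latent codes labelled with a binary attribute and using the normal of its decision boundary as an editing direction. The data below is random and the labels are placeholders; in practice they would come from an attribute predictor or human annotation, and the latent codes from a pretrained StyleGAN.

```python
import numpy as np
from sklearn.svm import LinearSVC

w_codes = np.random.randn(1000, 512)           # latent codes (e.g. StyleGAN W space)
labels = np.random.randint(0, 2, size=1000)    # binary attribute labels (e.g. smiling)

svm = LinearSVC(max_iter=10000).fit(w_codes, labels)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # semantic direction

alpha = 3.0                                    # manipulation strength (tuned by hand)
w_edited = w_codes[0] + alpha * direction      # linear edit along the direction
```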
“…Furthermore, these works typically require manual tuning of hyperparameters such as manipulation strength and disentanglement magnitude. Other works take an alternative approach and train a network that predicts a per-image offset in the latent space for an intended edit [2,3,17,29]. These methods neither assume perfect disentanglement in the latent space, nor require manual tuning of the edit magnitude.…”
Section: Introduction (mentioning)
confidence: 99%
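The per-image offset idea in the excerpt above can be sketched as a network that takes a latent code together with an embedding of the intended edit and outputs the full offset directly, so no manipulation strength needs to be tuned per edit. The names, sizes, and edit-conditioning scheme below are illustrative assumptions, not the method of any of the cited works.

```python
import torch
import torch.nn as nn

class OffsetPredictor(nn.Module):
    """Predicts a per-image latent offset for a chosen edit."""
    def __init__(self, w_dim=512, num_edits=8):
        super().__init__()
        self.edit_embed = nn.Embedding(num_edits, w_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * w_dim, w_dim), nn.ReLU(),
            nn.Linear(w_dim, w_dim),
        )

    def forward(self, w, edit_id):
        e = self.edit_embed(edit_id)                     # embedding of the intended edit
        return w + self.net(torch.cat([w, e], dim=-1))   # offset depends on the image

w = torch.randn(4, 512)
edited = OffsetPredictor()(w, torch.tensor([2, 2, 2, 2]))  # apply hypothetical edit #2
```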
“…As may be expected, non-linear approaches can offer higher-quality editing at the cost of simplicity. Hou et al. [2022] operate similarly to Yang et al. [2020a] by using classifiers. However, they propose to move beyond global, linear directions and towards a non-linear traversal paradigm.…”
Section: Latent Space Editing (mentioning)
confidence: 99%
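A minimal sketch of the classifier-guided, non-linear traversal idea described in this last excerpt (not the cited paper's actual procedure): the latent code is updated step by step along the gradient of an attribute classifier's score, so the path can curve through latent space instead of following one fixed direction. The classifier here is a randomly initialized stand-in.

```python
import torch
import torch.nn as nn

# Stand-in attribute classifier over latent codes (would be pretrained in practice).
attribute_clf = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))

w = torch.randn(1, 512, requires_grad=True)   # latent code to edit
step_size = 0.05

for _ in range(20):                           # iterative, non-linear traversal
    score = attribute_clf(w).sum()            # predicted attribute strength
    grad, = torch.autograd.grad(score, w)
    with torch.no_grad():
        w += step_size * grad / grad.norm()   # step toward a higher attribute score
```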