SIGGRAPH Asia 2022 Conference Papers
DOI: 10.1145/3550469.3555382

Stitch it in Time: GAN-Based Facial Editing of Real Videos

Cited by 47 publications (24 citation statements)
References 31 publications
“…As we demonstrate in the experiments, previous approaches have difficulty handling these challenging cases. With our GAN inversion, we can modify the latent code to perform high-quality semantic image editing [18,24,40,45] or video editing [53,58,60].…”
Section: Related Work
Mentioning confidence: 99%
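
The latent-code editing mentioned in this excerpt typically amounts to shifting an inverted code along a semantic direction before re-synthesizing the image. A minimal PyTorch-style sketch, assuming a hypothetical pre-trained StyleGAN-like `generator` and a pre-computed attribute `direction` vector (neither is defined by the indexed paper itself):

```python
import torch

def edit_latent(w, direction, strength=2.0):
    """Shift an inverted latent code along a semantic direction.

    w         : (1, num_layers, 512) latent code obtained from GAN inversion
    direction : (512,) unit vector for an attribute (e.g. age, smile)
    strength  : scalar controlling the magnitude of the edit
    """
    return w + strength * direction.view(1, 1, -1)

# Hypothetical usage, assuming `invert`, `generator`, and `direction` exist:
# w_inv = invert(image)                       # any GAN inversion method
# w_edit = edit_latent(w_inv, direction, 3.0)
# edited_image = generator.synthesis(w_edit)
```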
“…By changing the latent code, one can achieve many creative semantic editing effects [18,24,40,45] for images. For the video domain, recent methods also achieve faithful and temporally consistent editing [53,58,60]. However, these 2D methods often lack explicit viewpoint control of the generated contents.…”
Section: Introduction
Mentioning confidence: 99%
“…Other methods combine the optimization and encoder approaches and propose hybrid strategies by using the encoder for initialization and refining the latent code by optimization [35,95]. Recent 2D GAN inversion methods achieve faithful reconstruction with high editing capabilities and have been extended for video editing [2,75,88]. However, editing 3D-related attributes such as camera parameters and head pose remains inconsistent and prone to severe flickering as the pre-trained generator is unaware of the 3D structure.…”
Section: GAN Inversion
Mentioning confidence: 99%
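
The hybrid inversion strategy described in this excerpt, using an encoder prediction as the starting point and then refining the latent code by optimization, can be sketched as follows. This is a generic illustration under assumed interfaces, not the cited papers' exact procedure; `encoder` and `generator` are stand-ins, and real methods add perceptual (e.g. LPIPS) and regularization terms to the loss:

```python
import torch

def hybrid_invert(image, encoder, generator, steps=300, lr=0.01):
    """Encoder-initialized latent optimization (generic sketch).

    image     : (1, 3, H, W) target image in the generator's value range
    encoder   : network mapping an image to an initial latent code
    generator : pre-trained GAN generator exposing a synthesis() method
    """
    with torch.no_grad():
        w = encoder(image)                  # encoder provides the initialization
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):                  # refine the latent code by optimization
        recon = generator.synthesis(w)
        loss = torch.nn.functional.mse_loss(recon, image)  # + perceptual terms in practice
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```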
“…Specifically, video inversion should be temporally consistent. Tzaban et al [TMG*22] demonstrate that by combining encoders [TAN*21] with generator tuning techniques [RMBCO21], the consistency of the original video can be maintained. Another challenge can be found in the texture‐sticking phenomenon observed in StyleGAN1 and StyleGAN2 [KAL*21], which hinders the realism of generated and manipulated videos.…”
Section: Encoding and Inversion
Mentioning confidence: 99%
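
The combination of an encoder with generator tuning that this excerpt attributes to Tzaban et al. [TMG*22] (the paper indexed here) builds on pivotal tuning [RMBCO21]: per-frame latent codes are obtained first and then frozen, and the generator weights themselves are fine-tuned so that every frame is reconstructed faithfully and consistently. The sketch below is a simplified illustration of that idea, not the paper's full pipeline; it omits the e4e encoder, frame alignment, the LPIPS term, and regularization:

```python
import torch

def tune_generator(frames, pivots, generator, steps=500, lr=3e-4):
    """Pivotal-tuning-style generator fine-tuning around fixed per-frame codes.

    frames    : list of (1, 3, H, W) video frames
    pivots    : list of frozen latent codes, one per frame (from an encoder)
    generator : pre-trained generator whose weights are fine-tuned in place
    """
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        for frame, w in zip(frames, pivots):
            recon = generator.synthesis(w)          # latent code stays fixed
            loss = torch.nn.functional.mse_loss(recon, frame)  # + LPIPS in practice
            opt.zero_grad()
            loss.backward()
            opt.step()                              # only generator weights update
    return generator
```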