2021
DOI: 10.1145/3478513.3480559

Pose with Style

Abstract: We present an algorithm for re-rendering a person from a single image under arbitrary poses. Existing methods often have difficulties in hallucinating occluded contents photo-realistically while preserving the identity and fine details in the source image. We first learn to inpaint the correspondence field between the body surface texture and the source image with a human body symmetry prior. The inpainted correspondence field allows us to transfer/warp local features extracted from the source to the target vi…
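The warping step described in the abstract can be illustrated with a short sketch. This is not the authors' code: it assumes the inpainted correspondence field is already given as a dense grid of normalized source coordinates (one (x, y) location per target pixel), and the function name `warp_features` is purely illustrative.

```python
# Minimal sketch (not the authors' code): warping source-image features to the
# target pose with a dense correspondence field, as described in the abstract.
# Assumes the inpainted field is expressed in normalized [-1, 1] coordinates,
# one (x, y) source location per target pixel.
import torch
import torch.nn.functional as F

def warp_features(src_feat: torch.Tensor, corr_field: torch.Tensor) -> torch.Tensor:
    """src_feat:   (B, C, H, W) features extracted from the source image.
    corr_field:    (B, H, W, 2) for each target pixel, the normalized (x, y)
                   location in the source image to copy from.
    Returns features aligned with the target pose, shape (B, C, H, W)."""
    return F.grid_sample(src_feat, corr_field, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Toy usage: an identity correspondence field leaves the features unchanged.
B, C, H, W = 1, 64, 32, 32
feat = torch.randn(B, C, H, W)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
identity = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # (1, H, W, 2)
assert torch.allclose(warp_features(feat, identity), feat, atol=1e-5)
```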

Cited by 75 publications (19 citation statements)
References 53 publications
“…Similar to previous methods [LIP19; APTM19; GSVL19; ALY*21], we map the source image into the UV space of the SMPL model. We denote the mapped partial texture as T_source.…”
Section: Methods
confidence: 99%
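As a rough illustration of the UV-mapping step this quote describes (not the cited papers' implementation), the sketch below assumes a DensePose-style per-pixel correspondence to the SMPL surface is already available and simply scatters the visible source pixels into a UV texture, leaving unobserved texels empty; `image_to_partial_uv_texture` and its arguments are hypothetical names.

```python
# Sketch of mapping a source image into SMPL UV space to obtain a partial
# texture (T_source in the quote). Assumes a DensePose-style correspondence:
# every foreground pixel of the source image already has a (u, v) coordinate
# on the SMPL surface. Unobserved texels remain empty.
import torch

def image_to_partial_uv_texture(image, uv, mask, tex_size=256):
    """image: (3, H, W) source image; uv: (2, H, W) in [0, 1]; mask: (H, W) bool.
    Returns (texture (3, S, S), valid (S, S)) with S = tex_size."""
    texture = torch.zeros(3, tex_size, tex_size)
    valid = torch.zeros(tex_size, tex_size, dtype=torch.bool)
    u = (uv[0][mask] * (tex_size - 1)).long()
    v = (uv[1][mask] * (tex_size - 1)).long()
    texture[:, v, u] = image[:, mask]   # scatter visible pixels into UV space
    valid[v, u] = True                  # texels actually observed in the source
    return texture, valid

# Toy usage with random stand-ins for the source image and its per-pixel UVs.
H, W = 128, 64
img = torch.rand(3, H, W)
uv = torch.rand(2, H, W)
fg = torch.rand(H, W) > 0.5
T_source, T_valid = image_to_partial_uv_texture(img, uv, fg)
```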
“…These methods reported better outcomes in retaining the local details presented in the source image compared to the methods [NGK18] that directly utilize the color pixels of the source image. Similar to these approaches [GSVL19; ALY*21], we use a sampler network to sample the visible regions in the source image to create the texture map. Because the texture map is used for intermediate representation, it is prone to artifacts such as stretching‐out or distortion when directly used for rendering as shown in Figure 5.…”
Section: Related Work
confidence: 99%
“…Albahar et al [ALY*21] suggest spatial control through the initial input constant, while leveraging the inherent semantic understanding StyleGAN naturally develops, reinforced by human pose labeling. Unlike most editing works, which manipulate the behavior of a pretrained StyleGAN, this work proposes architectural changes to the generator, to adapt it to human pose inputs.…”
Section: Latent Space Editing
confidence: 99%
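A schematic sketch of the architectural idea this quote refers to, under the assumption that the learned 4x4 constant of a StyleGAN-like generator is replaced by features encoded from a spatial pose rendering; the module name and layer sizes below are illustrative, not the actual generator.

```python
# Schematic sketch (illustrative only): instead of StyleGAN's learned constant
# input, the first generator feature map is produced from a spatial pose
# representation, so the pose controls layout while style modulation in the
# later synthesis blocks still controls appearance.
import torch
import torch.nn as nn

class PoseConditionedInput(nn.Module):
    """Replaces the 4x4 learned constant with features encoded from a pose map."""
    def __init__(self, pose_channels=3, out_channels=512, out_size=4):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(pose_channels, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(out_size),
            nn.Conv2d(128, out_channels, 1),
        )

    def forward(self, pose_map):
        # pose_map: (B, pose_channels, H, W), e.g. a rendered skeleton or DensePose image.
        return self.encode(pose_map)  # (B, out_channels, 4, 4), fed to the synthesis blocks

# Toy usage: a 256x256 pose rendering becomes the generator's starting tensor.
start = PoseConditionedInput()(torch.rand(2, 3, 256, 256))
print(start.shape)  # torch.Size([2, 512, 4, 4])
```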
“…While all the aforementioned works showed incredible results and promise in real‐world scenarios, they are limited in the domains they operate over. Some works have explored going beyond the facial domain and have explored applying StyleGAN for full‐body synthesis in various applications such as virtual try‐on and portrait reposing [LVKS21,ALY*21].…”
Section: Encoding and Inversion
confidence: 99%