2019 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2019.00076
360-Degree Textures of People in Clothing from a Single Image

Figure 1: Given a single view of a person we predict a complete texture map in UV space, complete clothing segmentation, as well as a displacement map for the SMPL model [41], which we then combine to obtain a fully-textured 3D avatar.

Abstract: In this paper we predict a full 3D avatar of a person from a single image. We infer texture and geometry …
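The abstract describes adding predicted geometry to the SMPL body by way of a displacement map. A minimal sketch of that last step, offsetting each vertex of a base mesh along its normal by a predicted scalar displacement, is shown below; all names are illustrative assumptions, not the paper's actual code, and the real pipeline predicts displacements in UV space with a neural network.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals for a triangle mesh.

    vertices: (V, 3) float array, faces: (F, 3) int array.
    """
    normals = np.zeros_like(vertices)
    tris = vertices[faces]                        # (F, 3, 3)
    face_n = np.cross(tris[:, 1] - tris[:, 0],
                      tris[:, 2] - tris[:, 0])    # un-normalized face normals
    for i in range(3):                            # accumulate onto incident vertices
        np.add.at(normals, faces[:, i], face_n)
    norms = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(norms, 1e-12, None)

def displace_mesh(vertices, faces, per_vertex_offset):
    """Offset each vertex along its normal by a scalar displacement
    (e.g. sampled from a predicted UV displacement map)."""
    n = vertex_normals(vertices, faces)
    return vertices + per_vertex_offset[:, None] * n

# Toy example: a single triangle in the z=0 plane pushed along +z.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
displaced = displace_mesh(verts, faces, np.full(3, 0.1))
```

In the paper's setting the base mesh would be the posed SMPL template, and the offsets would encode clothing and hair geometry on top of the minimally clothed body.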

Cited by 142 publications (110 citation statements). References 83 publications (118 reference statements).
“…There were other approaches in which various cues were used to build sufficient loss functions to train the network, including the mesh [31], the texture [44], multi-view images [34], the optimized SMPL model [30] and video [27,29]. In order to model detailed appearance, some methods attempt to refine the regressed SMPL model to obtain a detailed 3D model [1,3,23,32,42,53,61,62]. In [1,3,32], after estimating the pose and shape of the SMPL model, the authors used shape from shading and texture translation to add details to SMPL such as face, hairstyle, and clothes with garment wrinkles.…”
Section: Parametric Human Body Model Based Regression
confidence: 99%
“…In order to model detailed appearance, some methods attempt to refine the regressed SMPL model to obtain a detailed 3D model [1,3,23,32,42,53,61,62]. In [1,3,32], after estimating the pose and shape of the SMPL model, the authors used shape from shading and texture translation to add details to SMPL such as face, hairstyle, and clothes with garment wrinkles. In addition, explicit representations of the 3D human body model were also used in detailed reconstruction.…”
Section: Parametric Human Body Model Based Regression
confidence: 99%
“…However, active IR-based depth sensors are unsuitable for outdoor capture, and their high power consumption limits mobile application. Recently, with the advent of deep neural networks, purely RGB-based monocular methods have been proposed to encode various priors of human models such as motion [23,24,50], geometry [20,28,36,37], garment [8] or appearance [25]. However, such methods still rely on per-vertex representations [1], while [39][40][41] bring huge potential to enable more realistic 2D rendering results in novel views.…”
Section: Related Work
confidence: 99%
“…Similarly, it is possible to register two models using manifold-harmonics-based non-rigid registration [LRB*16], but this would not help for skeleton-based registration. Learning approaches can also work from a single image [LIPM19, AMB*19].…”
Section: Related Work
confidence: 99%