2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00552
Multi-Garment Net: Learning to Dress 3D People From Images

Abstract: We present Multi-Garment Network (MGN), a method to predict body shape and clothing, layered on top of the SMPL [40] model, from a few frames (1-8) of a video. Several experiments demonstrate that this representation allows a higher level of control than single-mesh or voxel representations of shape. Our model can predict garment geometry, relate it to the body shape, and transfer it to new body shapes and poses. To train MGN, we leverage a digital wardrobe containing 712 digital garments in cor…
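The layered representation described in the abstract can be pictured as storing each garment as per-vertex offsets from an underlying body template, so the same garment can be re-applied to a new body shape. The sketch below is illustrative only; the function names, vertex association array, and shapes are assumptions, not the MGN implementation.

```python
import numpy as np

def dress(body_vertices: np.ndarray, garment_offsets: np.ndarray,
          garment_to_body: np.ndarray) -> np.ndarray:
    """Place a garment on a body by adding each garment vertex's stored
    offset to the body vertex it is associated with (hypothetical scheme)."""
    return body_vertices[garment_to_body] + garment_offsets

# Toy example: 4 body vertices, a 2-vertex "garment".
body_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
assoc = np.array([1, 2])        # garment vertex i sits on body vertex assoc[i]
offsets = np.array([[0., 0., .02], [0., 0., .02]])  # small outward offsets

garment_on_a = dress(body_a, offsets, assoc)

# Transfer: the same offsets applied to a different body shape.
body_b = body_a * 1.1           # a slightly larger body
garment_on_b = dress(body_b, offsets, assoc)
```

Because the garment lives in the body's local frame as offsets rather than as an independent mesh, retargeting to a new shape is a single re-application of the offsets, which is the kind of control single-mesh or voxel representations do not offer.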

Cited by 326 publications (150 citation statements)
References 64 publications (75 reference statements)
“…Even simple architectures have been shown to solve this task with a low error rate [15]. Recent advances are due to combining the 2D and 3D tasks into a joint estimation [4], [32], [33], [34] and using weakly [35], [36], [37], [38], [39] or self-supervised losses [40], [41], [42], [43], [44] or mixing 2D and 3D data for training [6], [42], [45], [46].…”
Section: Monocular 3D Pose Estimation From an External Camera
confidence: 99%
“…Note that the ClothCap method uses a sequence of scan data to increase the robustness of labelling, while ours uses only single scan data. As shown in Figure 16, the work of [BTTPM19] greatly improved the quality of the MRF-based segmentation by combining with semantic image segmentation. However, it still shows a few incomplete segmentations in the published dataset.…”
Section: Results
confidence: 99%
“…This method defines weak priors on the surface that are likely to belong to a certain class, and then solves the Markov Random Field (MRF) to perform the segmentation. The work of [BTTPM19] builds upon this approach by additionally incorporating image-based semantic segmentation [GLL*18] into the pipeline. The techniques mentioned above show overall robust segmentation in their applicable scopes; however, since they use independent per-vertex prediction, segmentation is oftentimes noisy, especially on the boundaries between classes.…”
Section: Garment Segmentation on 3D Human Scans
confidence: 99%
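The MRF formulation in the quote above combines weak per-vertex priors (a unary term) with a smoothness term between neighboring vertices, and the noted boundary noise is what the pairwise term is meant to suppress. A minimal sketch of this idea, using iterated conditional modes (ICM) with a Potts pairwise cost purely for illustration (the cited works use their own solvers and energy terms):

```python
import numpy as np

def icm_segment(unary, neighbors, smooth=0.5, iters=10):
    """Smooth per-vertex labels on a mesh graph.

    unary:     (V, K) array, cost of assigning label k to vertex v
               (the "weak prior" on each surface vertex).
    neighbors: list of neighbor-index lists, one per vertex.
    smooth:    weight of the Potts pairwise term (assumed value).
    """
    labels = unary.argmin(axis=1)          # independent per-vertex start
    for _ in range(iters):
        changed = False
        for v in range(len(labels)):
            # Pairwise cost: number of neighbors disagreeing with label k.
            pair = np.array([sum(labels[n] != k for n in neighbors[v])
                             for k in range(unary.shape[1])], dtype=float)
            best = int(np.argmin(unary[v] + smooth * pair))
            if best != labels[v]:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Toy chain of 5 vertices, 2 labels; the middle vertex has a noisy prior.
unary = np.array([[0., 1.], [0., 1.], [1., 0.], [0., 1.], [0., 1.]])
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
smoothed = icm_segment(unary, nbrs, smooth=0.6)
```

Running the toy example flips the noisy middle vertex to agree with its neighbors, which is exactly the boundary-denoising effect the pairwise term provides over independent per-vertex prediction.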
“…Physical simulation work within the fashion domain focuses on clothing-body interactions, and datasets can be categorized into real data and created data. Despite rapid progress on previous datasets based on 2D images, such as DeepFashion [72], DeepFashion2 [77], and FashionAI [116], datasets based on 3D clothing remain rare or insufficient for training, like the digital wardrobe released by MGN [117]. In 2020, Heming et al [118] developed a comprehensive dataset named Deep Fashion3D, which is richly annotated and covers a much larger variation of garment styles.…”
Section: Video Sequences
confidence: 99%