2019
DOI: 10.1145/3355089.3356511
Dynamic hair modeling from monocular videos using deep neural networks

Abstract: We introduce a deep learning-based framework for modeling dynamic hair from monocular videos, which can be captured by a commodity video camera or downloaded from the Internet. The framework consists of two neural networks: HairSpatNet, which infers 3D spatial features of hair geometry from 2D image features, and HairTempNet, which extracts temporal features of hair motion from video frames. The spatial features are represented as 3D occupancy fields …
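The two-branch design in the abstract can be illustrated with a minimal, purely schematic sketch. The names `hair_spat_net` and `hair_temp_net` echo the paper's networks, but the bodies below are toy stand-ins (a channel-average lifted along a depth axis, and a frame difference) chosen only to show the tensor shapes involved; they are not the authors' architecture.

```python
import numpy as np

def hair_spat_net(image_features: np.ndarray, depth: int = 16) -> np.ndarray:
    """Toy stand-in for HairSpatNet: lifts 2D image features of shape
    (H, W, C) to a coarse 3D occupancy field of shape (depth, H, W).

    The real network learns this 2D-to-3D inference; here we simply
    average over channels and broadcast along a new depth axis.
    """
    plane = image_features.mean(axis=-1)                      # (H, W)
    volume = np.repeat(plane[None, :, :], depth, axis=0)      # (depth, H, W)
    return 1.0 / (1.0 + np.exp(-volume))                      # squash to [0, 1]

def hair_temp_net(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Toy stand-in for HairTempNet: extracts a temporal motion feature
    from two consecutive frames (here, a plain frame difference)."""
    return frame_b - frame_a
```

A usage sketch: given per-frame image features of shape `(H, W, C)`, the spatial branch yields an occupancy volume with values in [0, 1], while the temporal branch consumes consecutive frames; the paper combines both kinds of features to recover dynamic hair geometry.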

Cited by 30 publications (27 citation statements)
References 29 publications
“…Recently deep learning-based methods have been successfully applied for 3D animations of human faces [27], [28], hair [29], [30] and garments [5], [31]. As for garment synthesis, some approaches [4], [23], [32] are proposed to utilize a two-stream strategy consisting of global garment fit and local wrinkle enhancement.…”
Section: Cloth Animation
confidence: 99%
“…Dynamic Hair Capture. Compared to the vast body of work on hair geometry acquisition, work on hair dynamics acquisition [11,70,72,76] is much scarcer. Zhang et al [76] use hair simulation to enforce better temporal consistency over a per-frame hair reconstruction result.…”
Section: Related Work
confidence: 99%
“…Xu et al [70] perform visual tracking by aligning per-frame reconstructions of hair strands with motion paths of hair strands on a horizontal slice of a video volume. Yang et al [72] develop a deep learning framework for hair tracking using indirect supervision from 2D hair segmentation and a digital 3D hair dataset. However, those methods mainly focus on geometry modeling and are either not photometrically accurate or do not support drivable animation.…”
Section: Related Work
confidence: 99%
“…Machine learning methods consider players' choices in the animation industry for games and analyze diseases to contribute to decision-making mechanisms [2,6,7,15,34,46]. With the successful implementation of machine learning, attacks on the machine learning process, counter-attack methods, and increasing the robustness of learning have become hot research topics in recent years [24,27,31,37,51]. The presence of negative data samples or an attack on the model can lead to incorrect results in predictions and classifications, even in advanced models.…”
Section: Introduction
confidence: 99%