2012
DOI: 10.1145/2185520.2335467
Single-view hair modeling for portrait manipulation

Abstract: Figure 1: Given a portrait image and a few strokes drawn by the user as input (a), our method generates a strand-based 3D hair model as shown in (b), where a fraction of the reconstructed fibers are highlighted. The hair model can be used to convert the input portrait into a pop-up model (c), which can be rendered in a novel view (d). It also enables several interesting applications, such as transferring the hairstyle of one subject to another (e). Original images courtesy of Getty Images (a…

Cited by 20 publications (37 citation statements)
References 12 publications
“…To popularize hair capture for end-users, existing methods offer various tradeoffs among setup, quality, and robustness, e.g. thermal imaging [17], capturing from multiple views [19] versus single view [9], or requiring different amounts and types of user inputs [11,10,42]. Such methods can reduce manual edits but also limit the output effects to the captured data at hand.…”
Section: Hair Capture
confidence: 99%
“…Human hair is volumetric and often consists of highly intricate 3D structures, such as strands, wisps, buns, and braids, which are difficult to design with traditional 2D interfaces. 3D hair digitization and data-driven techniques can reduce the need for manual labor [27,11,10,25,19,9], but afford limited control for real production environments.…”
Section: Introduction
confidence: 99%
“…At runtime, the tracked head motion and expressions of the actor are transferred to animate the blendshape model, from which a novel image is rendered. To handle hair, it uses a single-view hair modeling technique [Chai et al 2012] to reconstruct a strand-based 3D hair model, which is also transformed together with the face model and rendered into the final result. As only one input image is used, the resulting avatars are not very expressive and do not have fine-scale details such as dynamic wrinkles.…”
Section: Related Work
confidence: 99%
“…Chai et al [6,7] demonstrated single-view hair modeling for manipulating portrait images and videos. Echevarria et al [9] generated a printable 3D surface with stylized color and geometric details, but could not reconstruct highly detailed individual hair strands.…”
Section: Related Work
confidence: 99%
“…Figure 1 shows the pipeline of our method. A set of 2D strand segments with high confidence (denoted SD) is traced from the input images by the strand-tracing method in [7]. The patch-based multi-view stereo algorithm (PMVS) [11] is then used to reconstruct a point cloud with normals.…”
Section: Overview
confidence: 99%
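The two-stage pipeline quoted above (trace confident 2D strand segments, then lift them to an oriented 3D point cloud via multi-view stereo) can be sketched as follows. This is a minimal illustrative sketch only: the type names, the confidence threshold, and the toy "triangulation" are assumptions for exposition, not the actual strand tracer of Chai et al. [7] or the real PMVS algorithm [11].

```python
# Hypothetical sketch of the quoted overview pipeline:
# (1) trace high-confidence 2D strand segments from input images,
# (2) reconstruct an oriented point cloud (PMVS-style) from them.
# All names and the toy lifting step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StrandSegment2D:
    points: list          # [(x, y), ...] samples in image coordinates
    confidence: float     # tracing confidence in [0, 1]

@dataclass
class OrientedPoint3D:
    position: tuple       # (x, y, z)
    normal: tuple         # unit surface normal (nx, ny, nz)

def trace_strands(image, threshold=0.9):
    """Placeholder for the strand-tracing step: keep only segments
    whose confidence exceeds the threshold (hypothetical value)."""
    # A real tracer follows the 2D orientation field of the hair
    # region; here we return canned toy segments instead.
    candidates = [
        StrandSegment2D([(10, 10), (12, 14)], confidence=0.95),
        StrandSegment2D([(30, 5), (31, 9)], confidence=0.40),
    ]
    return [s for s in candidates if s.confidence >= threshold]

def reconstruct_points(segments):
    """Placeholder for PMVS-style multi-view stereo: lift each 2D
    sample to an oriented 3D point (toy depth and normal here)."""
    cloud = []
    for seg in segments:
        for (x, y) in seg.points:
            cloud.append(OrientedPoint3D((x, y, 0.0), (0.0, 0.0, 1.0)))
    return cloud

segments = trace_strands(image=None)
cloud = reconstruct_points(segments)
print(len(segments), len(cloud))  # low-confidence segment is discarded
```

In the real system the second stage requires multiple calibrated views; the sketch only shows the shape of the data handed from tracing to reconstruction.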