Figure 1: Given a portrait image and a few strokes drawn by the user as input (a), our method generates a strand-based 3D hair model as shown in (b), where a fraction of the reconstructed fibers are highlighted. The hair model can be used to convert the input portrait into a pop-up model (c), which can be rendered in a novel view (d). It also enables several interesting applications, such as transferring the hairstyle of one subject to another (e). Original images courtesy of Getty Images (a) and Andrew MacPherson (e).

Abstract
Human hair is known to be very difficult to model or reconstruct. In this paper, we focus on applications related to portrait manipulation and take an application-driven approach to hair modeling. To enable an average user to achieve interesting portrait manipulation results, we develop a single-view hair modeling technique that requires only modest user interaction to meet the unique requirements of portrait manipulation. Our method relies on heuristics to generate a plausible high-resolution strand-based 3D hair model. This is made possible by an effective high-precision 2D strand tracing algorithm, which explicitly models uncertainty and local layering during tracing. The depth of the traced strands is solved through an optimization that simultaneously considers depth constraints, layering constraints, and regularization terms. Our single-view hair modeling enables a number of interesting applications that were previously challenging, including transferring the hairstyle of one subject to another in a potentially different pose, rendering the original portrait in a novel view, and image-space hair editing.
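The depth optimization sketched in this abstract combines depth constraints, layering constraints, and regularization. A minimal illustration, assuming a simple weighted linear least-squares formulation in which layering is encoded as a soft margin between a front and a back vertex (the function name, weights, and encoding below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def solve_strand_depths(n, depth_cons, layer_cons,
                        w_smooth=1.0, w_depth=10.0, w_layer=5.0):
    """Toy least-squares depth solve for n strand vertices.

    depth_cons: list of (index, target_depth) soft depth constraints.
    layer_cons: list of (front_idx, back_idx, margin) pairs asking the
                front vertex to sit `margin` in front of the back one,
                encoded as a soft equality (a simplification).
    """
    rows, rhs = [], []
    # Regularization: consecutive vertices should have similar depth.
    for i in range(n - 1):
        r = np.zeros(n); r[i], r[i + 1] = w_smooth, -w_smooth
        rows.append(r); rhs.append(0.0)
    # Depth constraints pin selected vertices near target depths.
    for i, z in depth_cons:
        r = np.zeros(n); r[i] = w_depth
        rows.append(r); rhs.append(w_depth * z)
    # Layering constraints separate front and back vertices by a margin.
    for f, b, m in layer_cons:
        r = np.zeros(n); r[f], r[b] = w_layer, -w_layer
        rows.append(r); rhs.append(-w_layer * m)
    A, b = np.vstack(rows), np.asarray(rhs)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Five vertices: pin the ends, ask vertex 1 to lie in front of vertex 3.
z = solve_strand_depths(5, depth_cons=[(0, 0.0), (4, 1.0)],
                        layer_cons=[(1, 3, 0.2)])
```

In a real system the layering terms would come from the occlusion relations detected during 2D strand tracing, and the solve would cover all traced strands jointly.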
Figure 1: Given (a) a single portrait image and a few user strokes, we generate (b) a high-quality 3D hair model whose visual fidelity and physical plausibility enable several dynamic hair manipulation applications, such as (c) physically based simulation, (d) combing, or (e, f) motion-preserving hair replacement in video. Original images courtesy of Asian Impressions Photography.

Abstract
This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., strand roots are fixed on the scalp, and the length and continuity of real strands in the image are preserved as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input and generate dynamic 3D hair models, allowing users to manipulate hair in a video or transfer styles from images to videos.
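The core idea of growing strands through a solved 3D vector field can be sketched with simple forward integration. Everything below (the toy field, function names, step sizes) is an illustrative assumption; the paper's iterative generation additionally handles root placement on the scalp and strand length/continuity:

```python
import numpy as np

def trace_strand(root, field_fn, step=0.05, n_steps=100):
    """Grow one strand from a scalp root through a 3D direction field.

    field_fn(p) returns a direction at point p; it stands in for the
    unambiguous 3D vector field solved from the image. Forward (Euler)
    integration with a fixed step length.
    """
    pts = [np.asarray(root, dtype=float)]
    for _ in range(n_steps):
        d = field_fn(pts[-1])
        d = d / (np.linalg.norm(d) + 1e-9)   # unit step direction
        pts.append(pts[-1] + step * d)
    return np.array(pts)

# Toy field: hair flows downward while swirling around the y-axis.
def toy_field(p):
    return np.array([-p[2], -1.0, p[0]])

strand = trace_strand([0.3, 1.0, 0.0], toy_field)
```

Because each strand is a continuous polyline rooted at a fixed point, the output is directly usable by physically based simulators, which is what makes the dynamic applications in the abstract possible.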
Abstract
We propose a novel system to reconstruct a high-quality hair depth map from a single portrait photo with minimal user input. We achieve this by combining depth cues such as occlusions, silhouettes, and shading with a novel 3D helical structural prior for hair reconstruction. We fit a parametric morphable face model to the input photo and construct a base shape in the face, hair, and body regions using occlusion and silhouette constraints. We then estimate the normals in the hair region via a Shape-from-Shading-based optimization that uses the lighting inferred from the face model and enforces an adaptive albedo prior modeling the typical color and occlusion variations of hair. We introduce a 3D helical hair prior that captures the geometric structure of hair and show that it can be recovered robustly and automatically from the input photo. Our system combines the base shape, the normals estimated by Shape from Shading, and the 3D helical hair prior to reconstruct high-quality 3D hair models. Our single-image reconstruction closely matches the results of a state-of-the-art multi-view stereo method applied to a multi-view dataset. Our technique can reconstruct a wide variety of hairstyles, ranging from short to long and from straight to messy, and we demonstrate the use of our 3D hair models for high-quality portrait relighting, novel view synthesis, and 3D-printed portrait reliefs.
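The helical structural prior can be made concrete with the parametric form of a circular helix. A toy sketch, assuming a z-aligned axis and noiseless samples (the paper fits helices with arbitrary axes and robust estimation; the estimator below is purely illustrative):

```python
import numpy as np

def helix(t, radius, pitch, phase=0.0):
    """Parametric circular helix along the z-axis: the local structural
    model assumed for hair wisps. `pitch` is the rise per full turn."""
    return np.stack([radius * np.cos(t + phase),
                     radius * np.sin(t + phase),
                     pitch * t / (2 * np.pi)], axis=-1)

def fit_radius_pitch(points):
    """Recover radius and pitch from samples of a z-aligned helix
    (toy estimator, not the paper's robust fitting procedure)."""
    radius = float(np.mean(np.linalg.norm(points[:, :2], axis=1)))
    # Unwrap the angular coordinate, then regress z against the angle.
    theta = np.unwrap(np.arctan2(points[:, 1], points[:, 0]))
    slope = np.polyfit(theta, points[:, 2], 1)[0]
    return radius, float(slope * 2 * np.pi)

t = np.linspace(0, 4 * np.pi, 200)
pts = helix(t, radius=0.5, pitch=0.1)
r, p = fit_radius_pitch(pts)
```

Fitting such helices to traced hair segments gives the reconstruction a strong geometric regularizer where shading and silhouette cues alone are ambiguous.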