We present a method to acquire dynamic properties of facial skin appearance, including dynamic diffuse albedo encoding blood flow, dynamic specular intensity, and per-frame high-resolution normal maps for a facial performance sequence. The method reconstructs these maps from a purely passive multi-camera setup, without the need for polarization or temporally multiplexed illumination. Hence, it is very well suited for integration with existing passive systems for facial performance capture. To solve this seemingly underconstrained problem, we demonstrate that albedo dynamics during a facial performance can be modeled as a combination of: (1) a static, high-resolution base albedo map, modeling full skin pigmentation; and (2) a dynamic, one-dimensional component in the CIE L*a*b* color space, which explains changes in hemoglobin concentration due to blood flow. We leverage this albedo subspace and additional constraints on appearance and surface geometry to also estimate specular reflection parameters and resolve high-resolution normal maps with unprecedented detail in a passive capture system. These constraints are built into an inverse rendering framework that minimizes the difference between the rendered face and the captured images, incorporating constraints from multiple views for every texel on the face. The presented method is the first system capable of capturing high-quality dynamic appearance maps at full resolution and video framerates, providing a major step forward in the area of facial appearance acquisition.
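The two-part albedo model described above can be sketched as a static base map plus a one-dimensional offset along a fixed direction in CIE L*a*b* space. The direction and scale values below are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

# Hypothetical sketch of the albedo subspace: a static base albedo in
# CIE L*a*b* plus a per-texel scalar times a fixed "hemoglobin" direction.
def dynamic_albedo_lab(base_lab, hemo_dir, s):
    """base_lab: (H, W, 3) static base albedo in L*a*b*.
    hemo_dir: (3,) unit direction modeling hemoglobin-induced color shift.
    s: (H, W) per-texel blood-flow coefficient for one frame."""
    return base_lab + s[..., None] * hemo_dir

base = np.tile([60.0, 18.0, 14.0], (4, 4, 1))  # plausible skin tone in Lab
d = np.array([-0.2, 0.9, 0.4])
d /= np.linalg.norm(d)                         # unit hemoglobin direction
s = np.full((4, 4), 2.0)                       # uniform flush for this frame
frame = dynamic_albedo_lab(base, d, s)
```

Conversion from L*a*b* back to linear RGB for rendering is omitted here; the key point is that the dynamics per texel are one-dimensional, which is what makes the inverse problem tractable.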
Figure 1: Examples of surface reflectance recovered using mobile reflectometry: (a) A spatially varying rough specular material acquired using our handheld mobile flash-based reflectometry. (b) Highly specular surface reflectance recovered using mobile LCD-based reflectometry, with enhanced mesostructure from close-up observations under natural lighting. (c) Surface reflectance of a large spatially varying material sample recovered using appearance transfer under natural lighting from surface reflectance obtained using the LCD-based approach for a small reference patch.

Abstract: We present two novel mobile reflectometry approaches for acquiring detailed spatially varying isotropic surface reflectance and mesostructure of a planar material sample using commodity mobile devices. The first approach relies on the integrated camera and flash pair present on typical mobile devices to support free-form handheld acquisition of spatially varying rough specular material samples. The second approach, suited for highly specular samples, uses the LCD panel to illuminate the sample with polarized second-order gradient illumination. To address the limited overlap of the front-facing camera's view and the LCD illumination (and thus limited sample size), we propose a novel appearance transfer method that combines controlled reflectance measurement of a small exemplar section with uncontrolled reflectance measurements of the full sample under natural lighting. Finally, we introduce a novel surface detail enhancement method that adds fine-scale surface mesostructure from close-up observations under uncontrolled natural lighting. We demonstrate the accuracy and versatility of the proposed mobile reflectometry methods on a wide variety of spatially varying materials.
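The gradient illumination the LCD panel displays can be sketched as a family of patterns over normalized screen coordinates: a constant pattern plus first- and second-order gradients. The pattern names below are my own labels for illustration, not the paper's notation:

```python
import numpy as np

# Illustrative sketch of constant, first-order, and second-order gradient
# illumination patterns over normalized [0, 1] screen coordinates.
def gradient_patterns(h, w):
    y, x = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                       indexing="ij")
    return {
        "const": np.ones((h, w)),
        "x": x, "y": y,                      # first-order gradients
        "xx": x**2, "yy": y**2, "xy": x * y, # second-order gradients
    }

pats = gradient_patterns(8, 8)
```

In a gradient-illumination setup, moments of the reflectance lobe are read off from the sample's response to each pattern; the second-order patterns provide the extra constraints needed to estimate specular lobe shape for highly specular samples.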
We present a novel approach for on-site acquisition of surface reflectance for planar, spatially varying, isotropic samples in uncontrolled outdoor environments. Our method exploits the naturally occurring linear polarization of incident and reflected illumination for this purpose. By rotating a linear polarizing filter in front of a camera to three different orientations, we measure the polarization reflected off the sample and combine this information with multi-view analysis and inverse rendering in order to recover per-pixel, high-resolution reflectance and surface normal maps. Specifically, we employ polarization imaging from two near-orthogonal views close to the Brewster angle of incidence in order to maximize polarization cues for surface reflectance estimation. To the best of our knowledge, our method is the first to successfully extract a complete set of reflectance parameters with passive capture in completely uncontrolled outdoor settings. To this end, we analyze our approach under the general, but previously unstudied, case of incident partial linear polarization (due to the sky) in order to identify the strengths and weaknesses of the method under various outdoor conditions. We provide practical guidelines for on-site acquisition based on our analysis, and demonstrate high-quality results with an entry-level DSLR as well as a mobile phone.
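Three filter orientations are exactly enough to recover the linear-polarization state per pixel. A minimal sketch, using the standard Malus-type relation I(θ) = ½(S₀ + S₁ cos 2θ + S₂ sin 2θ) for orientations 0°, 45°, and 90° (this is the textbook closed form, not the paper's full multi-view pipeline):

```python
import numpy as np

# Recover per-pixel Stokes components, degree of linear polarization (DoLP),
# and angle of linear polarization (AoLP) from three filter orientations.
def stokes_from_three(i0, i45, i90):
    s0 = i0 + i90                     # total intensity
    s1 = i0 - i90
    s2 = 2.0 * i45 - i0 - i90
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)   # quadrant-aware angle
    return s0, dolp, aolp

# Synthetic check: light that is 40% polarized at 30 degrees.
rho, phi, I = 0.4, np.deg2rad(30.0), 1.0
meas = [0.5 * I * (1 + rho * np.cos(2 * (phi - t)))
        for t in np.deg2rad([0.0, 45.0, 90.0])]
s0, dolp, aolp = stokes_from_three(*meas)
```

Near the Brewster angle the specular reflection is almost fully polarized while diffuse reflection is largely unpolarized, so a high DoLP is a strong per-pixel cue for separating the two components.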
We propose a new lightweight face capture system capable of reconstructing both high-quality geometry and detailed appearance maps from a single exposure. Unlike currently employed appearance acquisition systems, the proposed technology does not require active illumination and hence can readily be integrated with passive photogrammetry solutions. These solutions are in widespread use for 3D scanning humans as they can be assembled from off-the-shelf hardware components, but lack the capability of estimating appearance. This paper proposes a solution to overcome this limitation, by adding appearance capture to photogrammetry systems. The only additional hardware requirement for these solutions is that a subset of the cameras are cross-polarized with respect to the illumination, and the remaining cameras are parallel-polarized. The proposed algorithm leverages the images with the two different polarization states to reconstruct the geometry and to recover appearance properties. We do so by means of an inverse rendering framework, which solves for per-texel diffuse albedo, specular intensity, and high-resolution normals, as well as global specular roughness, considering the subsurface scattering nature of skin. We show results for a variety of human subjects of different ages and skin types, illustrating how the captured fine-detail skin surface and subsurface scattering effects lead to realistic renderings of their digital doubles, also in different illumination conditions.
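The role of the two polarization states can be sketched with the standard separation idea: under polarized illumination, specular reflection preserves polarization and is blocked by a cross-polarized camera, while diffuse (subsurface) reflection depolarizes and reaches both camera types. This is a hedged illustration of that principle, not the paper's full inverse rendering solve:

```python
import numpy as np

# Classic cross/parallel separation: the cross-polarized image contains only
# the (depolarized) diffuse component, up to a scale factor; subtracting it
# from the parallel-polarized image isolates the specular component.
def separate_reflectance(parallel_img, cross_img):
    diffuse = cross_img                                      # specular blocked
    specular = np.clip(parallel_img - cross_img, 0.0, None)  # remainder
    return diffuse, specular

# Synthetic pixel values: each camera sees half the diffuse radiance
# (0.6 / 2 = 0.3); only the parallel camera adds the specular 0.3.
par = np.full((2, 2), 0.6)
cr = np.full((2, 2), 0.3)
dif, spec = separate_reflectance(par, cr)
```

In practice this separation seeds the inverse rendering: the diffuse image constrains albedo, and the specular residual constrains specular intensity, roughness, and fine-scale normals.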
For several decades, researchers have been advancing techniques for creating and rendering 3D digital faces, where much of the effort has gone into geometry and appearance capture, modeling, and rendering techniques. This body of research has largely focused on facial skin, with much less attention devoted to peripheral components like hair, eyes, and the interior of the mouth. As a result, even with the best technology for facial capture and rendering, in most high-end productions a lot of artist time is still spent modeling the missing components and fine-tuning the rendering parameters to combine everything into photo-real digital renders. In this work we propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2). The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but are completed naturally with synthesized hair, eyes, inner mouth, and surroundings. Notably, we present the first method for multi-frame consistent projection into this latent space, allowing photo-realistic rendering and preservation of the identity of the digital human over an animated performance sequence, which can depict different expressions, lighting conditions, and viewpoints.
Our method can be used in new face rendering pipelines and, importantly, in other deep learning applications that require large amounts of realistic training data with ground-truth 3D geometry, appearance maps, lighting, and viewpoint.
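The projection step described above can be illustrated with a toy optimization: gradient descent on a latent code so that a generator's output matches a target render. Here a small linear map stands in for StyleGAN2; the actual method optimizes through the pretrained network with perceptual losses and enforces consistency across frames:

```python
import numpy as np

# Toy stand-in for latent projection: find a latent code w whose "generated
# image" A @ w reproduces a target render, by plain L2 gradient descent.
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 4))      # stand-in generator: image = A @ w
target = A @ rng.normal(size=4)   # a skin render we want to reproduce

w = np.zeros(4)
for _ in range(2000):
    residual = A @ w - target
    w -= 0.02 * (A.T @ residual)  # gradient of 0.5 * ||A @ w - target||^2
```

For an animated sequence, the multi-frame consistency the abstract highlights amounts to coupling the per-frame codes (e.g., penalizing large changes in w between neighboring frames) so the synthesized hair, eyes, and background do not flicker.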