Figure 1. From left to right: input face image; proxy 3D face, texture, and displacement map produced by our framework; detailed face geometry with the estimated displacement map applied to the proxy 3D face; and re-rendered facial image.

Abstract: We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis. For proxy generation, we conduct emotion prediction to determine a new expression-informed proxy. For detail synthesis, we present a Deep Facial Detail Net (DFDN) based on a Conditional Generative Adversarial Net (CGAN) that employs both geometry and appearance loss functions. For geometry, we capture 366 high-quality 3D scans from 122 different subjects under 3 facial expressions. For appearance, we use an additional 163K in-the-wild face images and apply image-based rendering to accommodate lighting variations. Comprehensive experiments demonstrate that our framework produces high-quality 3D faces with realistic details under challenging facial expressions.
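The caption's final step, applying the estimated displacement map to the proxy geometry, can be sketched as a per-vertex offset along the surface normal. A minimal NumPy illustration; the function name and array layout are assumptions, not the paper's API:

```python
import numpy as np

def apply_displacement(proxy_verts, normals, disp):
    """Offset each proxy vertex along its unit normal by the sampled
    displacement value to obtain detailed geometry. Names and array
    shapes here are illustrative, not the paper's API."""
    # proxy_verts: (N, 3) positions, normals: (N, 3) unit normals,
    # disp: (N,) displacement values sampled from the displacement map.
    return proxy_verts + disp[:, None] * normals
```

In practice the displacement values would be sampled from the estimated displacement map via the proxy's UV parameterization.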
Image-based lighting has enabled the creation of photo-realistic computer-generated content. However, it requires accurate capture of the illumination conditions, a task that is neither easy nor intuitive, especially for the average digital photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer camera, without specialized calibration targets or equipment. Our insight is to use a person's face as an outdoor light probe. To estimate HDR light probes from LDR faces, we use an inverse rendering approach that employs data-driven priors to guide the estimation of realistic HDR lighting. We build compact, realistic representations of outdoor lighting both parametrically and in a data-driven way, by training a deep convolutional autoencoder on a large dataset of HDR sky environment maps. Our approach can recover high-frequency, extremely high dynamic range lighting environments. For quantitative evaluation of lighting estimation and relighting accuracy, we also contribute a new database of face photographs with corresponding HDR light probes. We show that relighting objects with HDR light probes estimated by our method yields realistic results in a wide variety of settings.
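As rough intuition for the data-driven lighting prior, the sketch below stands in for the paper's deep convolutional autoencoder with a linear (PCA) basis fit to log-encoded environment maps; the log encoding tames the extreme dynamic range before fitting. All names and the choice of a linear model are assumptions for illustration only:

```python
import numpy as np

def fit_linear_lighting_basis(hdr_maps, k=8):
    """Toy stand-in for a learned autoencoder: fit a k-dimensional
    linear (PCA) basis to log-encoded HDR environment maps and return
    encode/decode closures. Illustrative only, not the paper's model."""
    # Flatten each map and move to log space to compress dynamic range.
    X = np.log1p(hdr_maps.reshape(len(hdr_maps), -1))
    mean = X.mean(axis=0)
    # Principal directions of the centered log-space data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]

    def encode(m):            # environment map -> k-dim lighting code
        return (np.log1p(m.ravel()) - mean) @ basis.T

    def decode(code):         # lighting code -> reconstructed HDR map
        return np.expm1(code @ basis + mean).reshape(hdr_maps.shape[1:])

    return encode, decode
```

A convolutional autoencoder plays the same role as this basis but captures non-linear structure in the sky maps; the compact code is what the inverse rendering step optimizes over.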
Figure 1: Left: our foveated resolution method running in a commercial video game engine. Right: our foveated resolution, ambient occlusion, tessellation, and ray-casting methods, respectively. Areas outside the circles are the peripheral regions rendered in lower detail.
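A foveated-resolution policy of the kind the figure depicts can be sketched as a shading rate that falls off with distance from the gaze point. The radii and rate values below are illustrative placeholders, not the paper's parameters:

```python
import math

def shading_rate(px, py, gaze, inner_r=200.0, outer_r=400.0):
    """Toy foveated-resolution policy: full shading rate near the gaze
    point, reduced rates farther out. Radii and rates are illustrative
    placeholders, not measured perceptual thresholds."""
    d = math.hypot(px - gaze[0], py - gaze[1])
    if d <= inner_r:
        return 1.0    # fovea: full resolution
    if d <= outer_r:
        return 0.5    # transition band: half resolution
    return 0.25       # periphery: quarter resolution
```

The same eccentricity test can gate other effects (ambient occlusion, tessellation, ray casting) so that expensive work is concentrated where the viewer is actually looking.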
Animated image sequences often exhibit a large amount of inter-frame coherence that standard rendering algorithms and pipelines are ill-equipped to exploit, limiting their efficiency. To address this inefficiency, we transfer rendering results across frames using a novel image warping algorithm based on fixed-point iteration. We analyze the behavior of the iteration and describe two alternative algorithms designed to suit different performance requirements. Further, to demonstrate the versatility of our approach, we apply it to a number of spatio-temporal rendering problems, including 30-to-60 Hz frame upsampling, stereoscopic 3D conversion, defocus blur, and motion blur. Finally, we compare our approach against existing image warping methods and demonstrate a significant performance improvement.
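The core idea of warping by fixed-point iteration can be sketched as solving source + motion(source) = target by repeated substitution: each target pixel searches for the source position whose motion vector lands on it. A minimal sketch assuming the motion field is queried through a callable; the names are illustrative, not the paper's API:

```python
import numpy as np

def backward_warp_source(motion, target, iters=8):
    """Solve y + motion(y) = target for the source position y via the
    fixed-point iteration y <- target - motion(y). `motion` is a
    hypothetical callable returning the motion vector at a position;
    convergence assumes the motion field is sufficiently smooth."""
    y = np.asarray(target, dtype=float)   # initial guess: the target itself
    for _ in range(iters):
        y = target - motion(y)
    return y
```

Once the source position is found, the previous frame can be sampled there to reuse its shading result at the target pixel of the new frame.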
We propose a new adaptive rendering algorithm that enhances the performance of Monte Carlo ray tracing by reducing the noise, i.e., variance, while preserving a variety of high-frequency edges in rendered images through a novel prediction-based reconstruction. To achieve our goal, we iteratively build multiple, but sparse, linear models. Each linear model has its prediction window, within which the linear model predicts the unknown ground truth image that would be generated with an infinite number of samples. Our method recursively estimates the prediction errors introduced by linear predictions performed with different prediction windows, and selects the optimal prediction window minimizing the error for each linear model. Since each linear model predicts multiple pixels within its optimal prediction window, we can construct our linear models at only a sparse set of pixels in the image. Predicting multiple pixels with a single linear model poses technical challenges related to deriving error analysis for regions rather than individual pixels, and these had not been addressed in the field. We address these technical challenges, and our method with robust error analysis leads to a drastically reduced reconstruction time even with higher rendering quality, compared to state-of-the-art adaptive methods. We demonstrate that our method outperforms previous methods numerically and visually with high-performance ray tracing kernels such as OptiX and Embree.
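Predicting multiple pixels from a single sparse model can be illustrated in 1-D: fit one first-order model over a prediction window and evaluate it at every pixel inside that window. This is a toy sketch under those assumptions, not the paper's estimator or its error analysis:

```python
import numpy as np

def window_linear_predict(noisy, center, half_width):
    """Fit one linear model over the prediction window around `center`
    and predict every pixel inside it -- a toy 1-D illustration of
    reconstructing many pixels from a single sparse model."""
    lo = max(0, center - half_width)
    hi = min(len(noisy), center + half_width + 1)
    xs = np.arange(lo, hi, dtype=float)
    A = np.stack([xs, np.ones_like(xs)], axis=1)   # slope + intercept
    coef, *_ = np.linalg.lstsq(A, noisy[lo:hi], rcond=None)
    return lo, hi, A @ coef                        # predictions for [lo, hi)
```

Because one model covers a whole window, model centers can be placed sparsely; the paper's contribution is the error analysis that chooses each window's size, which this sketch omits.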
Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high-fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expression, pose, and illumination. We provide a global optimization step that improves the alignment of 3D facial geometry to tracked 2D landmarks via 3D Laplacian deformation. Face detail is improved by extending Shape-from-Shading reconstruction with fitted albedo prior masks, together with a fast proportionality constraint between depth and image gradients that is consistent with local self-occlusion behavior. Together these measures better preserve the crucial facial features that define an actor's identity, and we illustrate this through a variety of comparisons with related works.
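The Laplacian-deformation alignment step can be sketched as a least-squares solve that preserves each vertex's Laplacian coordinate while softly pinning landmark vertices to their tracked targets. A toy sketch on a small vertex graph with a uniform (umbrella) Laplacian; the API and weight are hypothetical:

```python
import numpy as np

def laplacian_deform(verts, edges, anchors, w=10.0):
    """Toy Laplacian deformation: solve, in a least-squares sense, for
    positions that preserve each vertex's umbrella-Laplacian coordinate
    while softly pinning anchor vertices to target positions.
    `anchors` maps vertex index -> target position (hypothetical API)."""
    n, d = verts.shape
    L = np.zeros((n, n))
    for i, j in edges:                 # graph Laplacian from the edge list
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    delta = L @ verts                  # Laplacian coords of the rest pose
    C = np.zeros((len(anchors), n))    # soft landmark-constraint rows
    t = np.zeros((len(anchors), d))
    for r, (idx, pos) in enumerate(anchors.items()):
        C[r, idx] = w
        t[r] = w * np.asarray(pos)
    A = np.vstack([L, C])
    b = np.vstack([delta, t])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

On a real face mesh the anchors would be the mesh vertices corresponding to tracked 2D landmarks, and cotangent weights would typically replace the uniform Laplacian.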