Micro-appearance models explicitly model the interaction of light with microgeometry at the fiber scale to produce realistic appearance. To effectively match them to real fabrics, we introduce a new appearance matching framework to determine their parameters. Given a micro-appearance model and photographs of the fabric under many different lighting conditions, we optimize for the parameters that best match the photographs using a method based on computing derivatives during rendering. We believe this widely applicable framework is a useful research tool because it simplifies the development and testing of new models. Using the framework, we systematically compare several types of micro-appearance models. We acquired computed microtomography (micro-CT) scans of several fabrics, photographed the fabrics under many viewing/illumination conditions, and matched several appearance models to this data. We compare a new fiber-based light scattering model to the previously used microflake model. We also compare representing cloth microgeometry using volumes derived directly from the micro-CT data to using explicit fibers reconstructed from the volumes. From our comparisons, we draw the following conclusions: (1) given a fiber-based scattering model, volume- and fiber-based microgeometry representations can achieve very similar quality, and (2) a fiber-specific scattering model is crucial to good results, as it achieves considerably higher accuracy than prior work.
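To make the derivative-based matching concrete, the following sketch fits a toy stand-in renderer with two parameters to synthetic "photographs" by gradient descent, using derivatives of the rendered values with respect to the parameters. The slab renderer, the parameter set, and the learning rate are all illustrative assumptions, not the paper's actual light transport or optimizer.

```python
import numpy as np

# Toy stand-in "renderer": a homogeneous slab with two parameters,
# fiber albedo and optical density, evaluated at several viewing angles.
# An illustrative assumption, not the paper's light transport.
def render(albedo, density, view_cos):
    tau = density / np.maximum(view_cos, 1e-3)
    return albedo * (1.0 - np.exp(-tau))

view_cos = np.linspace(0.2, 1.0, 8)       # eight viewing conditions
photos = render(0.7, 1.5, view_cos)       # synthetic "photographs"

# Appearance matching: derivatives of the rendered values with respect
# to the parameters drive a gradient-descent fit to the photographs.
params = np.array([0.3, 0.5])             # initial guess: [albedo, density]
lr = 0.05
for step in range(2000):
    a, d = params
    tau = d / np.maximum(view_cos, 1e-3)
    img = a * (1.0 - np.exp(-tau))
    resid = img - photos
    d_img_da = 1.0 - np.exp(-tau)                             # d(img)/d(albedo)
    d_img_dd = a * np.exp(-tau) / np.maximum(view_cos, 1e-3)  # d(img)/d(density)
    grad = np.array([np.sum(2.0 * resid * d_img_da),
                     np.sum(2.0 * resid * d_img_dd)])
    params = params - lr * grad

print("recovered [albedo, density]:", params)  # approaches [0.7, 1.5]
```

In the paper's setting the renderer is a full micro-appearance simulation and the parameter vector is much larger, but the structure of the loop, render, compare to photographs, step along the derivatives, is the same idea.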
Scenes modeling the real world combine a wide variety of phenomena, including glossy materials, detailed heterogeneous anisotropic media, subsurface scattering, and complex illumination. Predictive rendering of such scenes is difficult; unbiased algorithms are typically too slow or too noisy. Virtual point light (VPL) based algorithms produce low-noise results across a wide range of performance/accuracy tradeoffs, from interactive rendering to high-quality offline rendering, but their bias means that locally important illumination features may be missing. We introduce a bidirectional formulation and a set of weighting strategies that significantly reduce the bias in VPL-based rendering algorithms. Our approach, bidirectional lightcuts, maintains the scalability and low-noise global illumination advantages of prior VPL-based work while significantly extending its generality to support a wider range of important materials and visual cues. We demonstrate scalable, efficient, and low-noise rendering of scenes with highly complex materials including gloss, BSSRDFs, and anisotropic volumetric models.
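As a concrete illustration of where the bias in VPL methods originates, the sketch below gathers illumination at a single diffuse shading point from a set of virtual point lights and clamps the geometry term, which bounds the variance from nearby VPLs at the cost of losing energy. The scene setup, clamping constant, and function names are illustrative assumptions; the paper's bidirectional weighting, which compensates for this lost energy, is not implemented here.

```python
import numpy as np

# Minimal sketch of VPL gathering at one shading point (diffuse-only,
# no visibility test), showing where clamping bias comes from.
def vpl_contribution(x, n, vpls, clamp=None):
    """Sum radiance at point x (normal n) from a list of VPLs.
    Each VPL is (position, normal, flux). Clamping the geometry term
    bounds variance from nearby lights but removes energy -- the bias
    that a bidirectional formulation aims to compensate for."""
    total = 0.0
    for p, np_, flux in vpls:
        d = p - x
        r2 = d @ d
        w = d / np.sqrt(r2)
        G = max(n @ w, 0.0) * max(np_ @ -w, 0.0) / r2  # geometry term
        if clamp is not None:
            G = min(G, clamp)          # clamped G: bounded but biased
        total += flux * G / np.pi      # diffuse BRDF = 1/pi
    return total

x = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
rng = np.random.default_rng(1)
vpls = [(rng.uniform(-1, 1, 3) + [0, 0, 1.5],   # positions above the point
         np.array([0.0, 0.0, -1.0]), 0.1)       # downward normals, fixed flux
        for _ in range(64)]
print("unclamped:", vpl_contribution(x, n, vpls))
print("clamped:  ", vpl_contribution(x, n, vpls, clamp=2.0))  # misses energy
```

The gap between the two printed values is exactly the short-distance energy that clamping discards, which in images shows up as darkened corners, contact shadows, and missing gloss.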
We study the problem of creating a character model that can be controlled in real time from a single image of an anime character. A solution to this problem would greatly reduce the cost of creating avatars, computer games, and other interactive applications. Talking Head Anime 3 (THA3) is an open-source project that attempts to directly address this problem [34]. It takes as input (1) an image of an anime character's upper body and (2) a 45-dimensional pose vector, and it outputs a new image of the same character taking the specified pose. The range of possible movements is expressive enough for personal avatars and certain types of game characters. However, the system is too slow to generate animations in real time on common PCs, and its image quality can be improved. In this paper, we improve THA3 in two ways. First, we propose new architectures for the constituent networks that rotate the character's head and body, based on the U-Nets with attention [23] that are widely used in modern generative models. The new architectures consistently yield better image quality than the THA3 baseline. However, they also make the whole system much slower: it takes up to 150 milliseconds to generate a frame. Second, we propose a technique to distill the system into a small network (< 2 MB) that can generate 512 × 512 animation frames in real time (≥ 30 FPS) on consumer gaming GPUs while keeping the image quality close to that of the full system. This improvement makes the whole system practical for real-time applications.
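The distillation step can be pictured with the following sketch: a small student network is trained to reproduce the outputs of the full (slow) system on randomly posed inputs, so that only the student is needed at runtime. The teacher placeholder, the student architecture, and all sizes here are illustrative assumptions standing in for the paper's actual U-Net-based networks and distillation losses.

```python
import torch
import torch.nn as nn

# Minimal sketch of distilling a slow (image, pose) -> image system into
# a small student network. Module sizes are illustrative, not the paper's.
class TinyStudent(nn.Module):
    def __init__(self, pose_dim=45):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, 16)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, img, pose):
        b, _, h, w = img.shape
        # Broadcast the projected pose vector over the image plane.
        p = self.pose_proj(pose).view(b, 16, 1, 1).expand(b, 16, h, w)
        return self.net(torch.cat([img, p], dim=1))

teacher = lambda img, pose: img  # placeholder for the full (slow) system
student = TinyStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):  # distillation: match teacher outputs, not data
    img = torch.rand(4, 3, 64, 64)   # random character crops (toy size)
    pose = torch.rand(4, 45)         # random 45-dimensional pose vectors
    with torch.no_grad():
        target = teacher(img, pose)  # expensive teacher frame
    loss = nn.functional.l1_loss(student(img, pose), target)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the teacher supplies unlimited labeled pairs for free, the student can be made as small as the quality budget allows, which is how a sub-2 MB network can stand in for the full system at runtime.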
The appearance of hair follows from the small-scale geometry of hair fibers, with the cross-sectional shape determining the azimuthal distribution of scattered light. Although previous research has described some of the effects of non-circular cross sections, no accurate scattering models for non-circular fibers exist. This article presents a scattering model for elliptical fibers, which predicts that even small deviations from circularity produce important changes in the scattering distribution, and which disagrees with previous approximations for the effects of eccentricity. To confirm the model's predictions, new scattering measurements of fibers from a wide range of hair types were made, using a new measurement device that provides a more complete and detailed picture of the light scattered by fibers than was previously possible. The measurements show features that conclusively match the model's predictions, but they also contain an ideal-specular forward-scattering behavior that the model does not predict and that has not been fully described before. The results of this article indicate that an accurate and efficient method for computing scattering from elliptical cylinders, which this article does not provide, is the correct model to use for realistic hair in the future, and that the new specular behavior should be included as well.
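To see geometrically why even small eccentricity matters, the sketch below traces a uniform parallel beam against an elliptical cross section in the normal plane and histograms the mirror-reflected azimuths, comparing a circular fiber with a mildly elliptical one. This is a purely geometric illustration (no Fresnel terms or internal refraction paths) and is not the article's scattering model; the axis ratio of 0.85 is an assumed example value.

```python
import numpy as np

# Reflect a uniform parallel beam (traveling +x) off an elliptical
# cross section x^2/a^2 + y^2/b^2 = 1 and return reflected azimuths.
def reflected_azimuths(a, b, n_rays=100_000):
    h = np.random.default_rng(0).uniform(-b, b, n_rays)  # impact parameter
    x = -a * np.sqrt(1.0 - (h / b) ** 2)   # first hit, on the lit side
    n = np.stack([x / a**2, h / b**2], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)        # outward normal
    d = np.array([1.0, 0.0])
    r = d - 2.0 * (n @ d)[:, None] * n                   # mirror reflection
    return np.arctan2(r[:, 1], r[:, 0])

# Compare the azimuthal histogram of the surface-reflection lobe for a
# circular fiber against a mildly elliptical one (axis ratio 0.85).
circ = np.histogram(reflected_azimuths(1.0, 1.0),
                    bins=36, range=(-np.pi, np.pi))[0]
ellip = np.histogram(reflected_azimuths(1.0, 0.85),
                     bins=36, range=(-np.pi, np.pi))[0]
print("max relative change across azimuth bins:",
      np.max(np.abs(ellip - circ) / np.maximum(circ, 1)))
```

Even this crude mirror-only experiment redistributes noticeable energy across azimuth bins, consistent with the article's claim that small deviations from circularity produce important changes in the scattering distribution.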
Figure 1: This kitchen scene combines many different phenomena, including glossy surfaces, subsurface BSSRDFs (e.g., the milk and the dragon), heterogeneous smoke, a highly detailed anisotropic volumetric cloth model (over a billion voxels of effective resolution), skylight through three windows, and 25 local lights. Computing global illumination in such a scene is extremely challenging, and standard VPL methods cannot capture many of the perceptually important illumination details. Our bidirectional method extends VPL-based techniques to handle a wider range of such phenomena. An equal-time bidirectional path-traced result is extremely noisy, while bidirectional lightcuts maintains the low-noise and scalability advantages of VPL-based methods. An equal-time probabilistic progressive photon map image shows visible noise (e.g., from glossy paths) and bias around small features (e.g., very thin cloth, < 5 mm).