Figure 1 (panels: Ground Truth, [dLE07], [JZJ*15], Ours): Our novel translucency technique allows for rendering realistic translucency effects in real time. This effect can be seen on thin surfaces of translucent materials such as the ears of the human head in the leftmost picture. The other pictures show a close-up view of the ear rendered with different existing real-time techniques compared to the ground truth.

Abstract: Rendering translucent materials in real time is usually done using surface diffusion and/or (translucent) shadow maps. The downsides of these approaches are that surface diffusion cannot handle translucency effects that show up when rendering thin objects, and that translucent shadow maps are only available for point light sources. Furthermore, translucent shadow maps impose limitations on shadow mapping techniques that reuse the same maps. In this paper we present a novel approach for rendering translucent materials at interactive frame rates. Our approach allows for an efficient calculation of translucency with native support for general illumination conditions, especially area and environment lighting, at high accuracy. The proposed technique's only parameter is the diffusion profile used, so it works out of the box without any parameter tuning. Furthermore, it can be combined with any existing surface diffusion technique to add translucency effects. Our approach introduces Spatial Adjacency Maps, which rely on precomputations performed for fixed meshes. We show that these maps can be updated in real time to also handle deforming meshes, and that our results are of superior quality compared to other well-known real-time techniques for rendering translucency.

Many materials exhibit translucency to some degree, which often results in a smooth appearance due to light scattering inside these materials. Depending on how much light is absorbed when passing through a medium and the depth of an object, translucency effects can also appear when light shines through an object. The process leading to these effects is called subsurface scattering. Subsurface scattering is very important for the appearance of materials like skin, marble or candle wax.
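As a minimal sketch of the core quantity such a technique evaluates, the snippet below attenuates light that has travelled a distance d through the material with a diffusion profile R(d). It uses a sum-of-Gaussians profile as commonly used for skin; the weights and variances are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

# Hypothetical sum-of-Gaussians diffusion profile R(d); the (weight, variance
# in mm^2) pairs below are illustrative placeholders, not the paper's values.
GAUSSIANS = [(0.233, 0.0064), (0.100, 0.0484), (0.118, 0.187),
             (0.113, 0.567), (0.358, 1.99), (0.078, 7.41)]

def diffusion_profile(d_mm):
    """Radially symmetric profile R(d) evaluated at distance d (in mm)."""
    d2 = d_mm * d_mm
    return sum(w / (2.0 * np.pi * v) * np.exp(-d2 / (2.0 * v)) for w, v in GAUSSIANS)

def transmitted_radiance(incoming, thickness_mm):
    """Light leaving the back side of a thin object: incoming radiance
    attenuated by the profile evaluated at the local object thickness."""
    return incoming * diffusion_profile(thickness_mm)

# Example: light shining through a roughly 2 mm thick, ear-like slab.
print(transmitted_radiance(np.array([1.0, 0.8, 0.7]), 2.0))
```

In a real renderer the thickness would come from the geometry (e.g., the distance between the lit back face and the shaded front face), which is exactly the kind of spatial information the abstract's adjacency maps are meant to provide efficiently.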
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image by a dedicated 3D-to-2D network in a second step. We will show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.
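The following is a schematic sketch of the two-stage idea only, under my own assumptions: both network stages are replaced by trivial placeholder operations (a single linear layer and a nearest-point splat), so it illustrates the data flow from point cloud to image rather than the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_point_features(points, feats, W):
    """Stage 1 (placeholder for the 3D network): map per-point attributes
    (position plus material/illumination features) to a latent vector per point."""
    x = np.concatenate([points, feats], axis=1)           # (N, 3 + F)
    return np.maximum(x @ W, 0.0)                         # single ReLU layer stand-in

def stage2_project_and_shade(points, latents, K, res=64):
    """Stage 2 (placeholder for the 3D-to-2D network): project points with a
    pinhole camera K and splat their latent vectors into a 2D feature image."""
    uvw = points @ K.T                                     # camera assumed at the origin
    uv = (uvw[:, :2] / uvw[:, 2:3] * res / 2 + res / 2).astype(int)
    image = np.zeros((res, res, latents.shape[1]))
    mask = (uv[:, 0] >= 0) & (uv[:, 0] < res) & (uv[:, 1] >= 0) & (uv[:, 1] < res)
    image[uv[mask, 1], uv[mask, 0]] = latents[mask]        # nearest-point splat
    return image.mean(axis=2)                              # collapse latents to a "shaded" image

# Toy scene: 1000 points in front of the camera, 4 extra features per point.
pts = rng.normal([0.0, 0.0, 3.0], 0.5, size=(1000, 3))
fts = rng.random((1000, 4))
W = rng.normal(size=(7, 8))
img = stage2_project_and_shade(pts, stage1_point_features(pts, fts, W), np.eye(3))
print(img.shape)  # (64, 64)
```

The point of the split is visible even in this toy version: shading-relevant information is computed on the full 3D point set, including points that end up occluded, and only afterwards reduced to the 2D image.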
Many lighting methods used in computer graphics, such as indirect illumination, can have very high computational costs and need to be approximated for real-time applications. These costs can be reduced by means of upsampling techniques, which tend to introduce artifacts and affect the visual quality of the rendered image. This paper suggests a versatile approach for accelerating the rendering of screen space methods while maintaining visual quality. This is achieved by exploiting the low-frequency nature of many of these illumination methods and the geometrical continuity of the scene. First the screen space is dynamically divided into separate sub-images, then the illumination is rendered for each sub-image at an adequate resolution, and finally the sub-images are put together to compose the final image. To this end, we identify edges in the scene and generate masks precisely specifying which part of the image is included in which sub-image. The masks thus determine which part of the image is rendered at which resolution. A stepwise upsampling and merging process then allows visually smooth transitions between the different resolution levels. For this paper, the introduced multi-resolution rendering method was implemented and tested on three commonly used lighting methods: screen space ambient occlusion, soft shadow mapping and screen space global illumination.
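A minimal sketch of the described pipeline, under my own simplifying assumptions: a depth-gradient edge mask, power-of-two resolution levels, nearest-neighbour upsampling instead of the paper's soft stepwise upsampling, and a toy shading function standing in for the screen space effect.

```python
import numpy as np

def edge_mask(depth, threshold=0.1):
    """Mark pixels near geometric discontinuities using depth gradients
    (stand-in for edge detection on depth/normals)."""
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy) > threshold

def render_level(shade_fn, depth, level):
    """Evaluate the (expensive) illumination at a reduced resolution:
    level 0 = full resolution, level k = 1/2^k resolution."""
    s = 2 ** level
    return shade_fn(depth[::s, ::s])

def upsample(img, shape):
    """Nearest-neighbour upsampling as a placeholder for stepwise upsampling
    with soft transitions between levels."""
    ry, rx = shape[0] // img.shape[0], shape[1] // img.shape[1]
    return np.repeat(np.repeat(img, ry, axis=0), rx, axis=1)

def multires_composite(shade_fn, depth, levels=(2, 1, 0)):
    """Render coarse levels everywhere, then overwrite pixels flagged by the
    edge mask with finer levels (coarse-to-fine merge)."""
    mask = edge_mask(depth)
    out = upsample(render_level(shade_fn, depth, levels[0]), depth.shape)
    for lvl in levels[1:]:
        fine = upsample(render_level(shade_fn, depth, lvl), depth.shape)
        out = np.where(mask, fine, out)   # full-resolution shading only near edges
    return out

# Toy example: a cheap ambient-occlusion-like shade of a synthetic depth map.
depth = np.tile(np.linspace(1.0, 2.0, 64), (64, 1)); depth[20:40, 20:40] = 0.5
ao = lambda d: 1.0 - np.clip(np.abs(np.gradient(d)[0]) * 10.0, 0.0, 1.0)
print(multires_composite(ao, depth).shape)  # (64, 64)
```

The saving comes from evaluating the expensive shading at full resolution only where the edge mask flags discontinuities, while smooth regions reuse the low-resolution result.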
Existing algorithms for rendering subsurface scattering in real time cannot deal well with scattering over longer distances. Kernels for image space algorithms become very large in these circumstances and separation no longer works, while geometry-based algorithms cannot preserve details very well. We present a novel approach that addresses all of these downsides. While for lower scattering distances the advantages of geometry-based methods are small, this is no longer the case for high scattering distances (as we will show). Our proposed method takes advantage of the highly detailed results of image space algorithms and combines them with a geometry-based method to add the essential scattering from sources not included in image space. Our algorithm does not require precomputation based on the scene's geometry, so it can be applied to static and animated objects directly. Our method provides results that come close to ray-traced images, which we show in direct comparisons with images generated by PBRT. We compare our results to state-of-the-art techniques that are applicable in these scenarios and show that we provide superior image quality while maintaining interactive rendering times.
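The sketch below only illustrates the hybrid idea of combining two sources of scattered light; the Gaussian screen-space filter, the surfel gather loop and the exponential falloff are my own illustrative stand-ins, not the paper's algorithm or parameters.

```python
import numpy as np

def screen_space_sss(irradiance, sigma_px=3):
    """Image-space part: a separable Gaussian blur of the visible irradiance,
    standing in for a separable subsurface-scattering filter (short-range detail)."""
    r = np.arange(-3 * sigma_px, 3 * sigma_px + 1)
    k = np.exp(-r**2 / (2.0 * sigma_px**2)); k /= k.sum()
    h = np.apply_along_axis(lambda row: np.convolve(row, k, 'same'), 1, irradiance)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'same'), 0, h)

def geometry_sss(pixel_pos, surfel_pos, surfel_irr, mfp=5.0):
    """Geometry-based part: for one visible shading point, gather long-range
    scattering from surface samples (surfels) that may be occluded in screen
    space. The exponential falloff with mean free path `mfp` is illustrative."""
    d = np.linalg.norm(surfel_pos - pixel_pos, axis=1)
    return np.sum(surfel_irr * np.exp(-d / mfp)) / len(surfel_pos)

# Toy inputs: light hitting one pixel, plus 100 surfels not visible on screen.
irr = np.zeros((32, 32)); irr[16, 16] = 1.0
rng = np.random.default_rng(1)
surfels, surfel_irr = rng.normal(size=(100, 3)), rng.random(100)

detail = screen_space_sss(irr)                         # short-range, high detail
long_range = geometry_sss(np.zeros(3), surfels, surfel_irr)
print(detail[16, 16] + long_range)                     # hybrid result at one pixel
```

The division of labour mirrors the abstract: the image-space term keeps fine detail cheap, while the geometry-based term supplies the long-distance scattering that never appears in the screen-space buffers.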