Monte Carlo (MC) integration is used ubiquitously in realistic image synthesis because of its flexibility and generality. However, MC estimators must trade off bias against variance, and low sample counts produce visually distracting noise. Existing solutions fall into two categories: in-process sampling schemes and post-processing reconstruction schemes. This report summarizes recent trends in post-processing reconstruction. Recent years have seen increasing attention and significant progress in denoising MC renderings with deep learning, by training neural networks to reconstruct denoised results from sparse MC samples. Many of these techniques show promising results in real-world applications, and this report aims to provide an assessment of these approaches for practitioners and researchers.
Real-time Monte Carlo denoising aims at removing severe noise under low samples per pixel (spp) within a strict time budget. Recently, kernel-prediction methods have used a neural network to predict each pixel's filtering kernel and have shown great potential for removing Monte Carlo noise. However, their heavy computational overhead blocks these methods from real-time applications. This paper extends the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path-traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel-map encoding yields a compact single-channel representation, which significantly reduces the throughput required of the kernel-prediction network. In addition, we adopt a scalable kernel-fusion module to improve denoising quality. The proposed approach preserves kernel-prediction methods' denoising quality while roughly halving their denoising time for 1-spp noisy inputs. Furthermore, compared with the recent neural bilateral-grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results at equal time.
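As an illustration of the kernel-based reconstruction step, the sketch below applies per-pixel filtering kernels decoded from a compact single-channel encoding. The decoder here is a hypothetical stand-in (each neighbour weight is derived from the difference of encoding values, then normalised); in the paper both the encoding and the decoder are learned, and the `decode_and_filter` name and `k=5` default are illustrative assumptions.

```python
import numpy as np

def decode_and_filter(noisy, encoding, k=5):
    """Decode a single-channel kernel-map encoding into per-pixel k x k
    filtering kernels and apply them to the noisy image.

    Hypothetical decoder: the weight between a pixel and a neighbour is
    exp(-(e_p - e_q)^2), normalised over the window. Each (dy, dx) loop
    iteration gathers one shifted neighbour plane in a single slice,
    mimicking an "unfold" operation."""
    H, W, _ = noisy.shape
    r = k // 2
    pad_img = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="edge")
    pad_enc = np.pad(encoding, r, mode="edge")
    out = np.zeros_like(noisy)
    wsum = np.zeros((H, W, 1))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            nb_img = pad_img[r + dy:r + dy + H, r + dx:r + dx + W]
            nb_enc = pad_enc[r + dy:r + dy + H, r + dx:r + dx + W]
            w = np.exp(-(encoding - nb_enc) ** 2)[..., None]
            out += w * nb_img
            wsum += w
    return out / wsum
```

With a constant encoding the decoded kernels reduce to a box filter, which makes the behaviour easy to sanity-check.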
Serious noise affects the rendering of global illumination using Monte Carlo (MC) path tracing when insufficient samples are used. The two common solutions to this problem are filtering noisy inputs to generate smooth but biased results and sampling the MC integrand with a carefully crafted probability distribution function (PDF) to produce unbiased results. Both solutions benefit from an efficient incident radiance field sampling and reconstruction algorithm. This study proposes a method for training quality and reconstruction networks (Q- and R-networks, respectively) with a massive offline dataset for the adaptive sampling and reconstruction of first-bounce incident radiance fields. The convolutional neural network (CNN)-based R-network reconstructs the incident radiance field in a 4D space, whereas the deep reinforcement learning (DRL)-based Q-network predicts and guides the adaptive sampling process. The approach is verified by comparing it with state-of-the-art unbiased path guiding methods and filtering methods. Results demonstrate improvements for unbiased path guiding and competitive performance in biased applications, including filtering and irradiance caching.
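To illustrate how a predicted quality map can guide adaptive sampling, the sketch below distributes a fixed sample budget in proportion to a per-pixel predicted error (a stand-in for the Q-network's output). The `allocate_samples` name and the simple proportional-allocation rule are assumptions for illustration; the paper's DRL-based policy is more sophisticated.

```python
import numpy as np

def allocate_samples(predicted_error, budget, min_spp=1):
    """Distribute a total sample budget across pixels in proportion to a
    predicted per-pixel error map. Every pixel keeps at least `min_spp`
    samples; the fractional remainders lost to flooring are handed out
    to the pixels with the largest remainders."""
    err = np.maximum(predicted_error, 0).astype(float)
    n_pixels = err.size
    extra = budget - min_spp * n_pixels
    assert extra >= 0, "budget too small for the per-pixel minimum"
    if err.sum() == 0:
        weights = np.full_like(err, 1.0 / n_pixels)
    else:
        weights = err / err.sum()
    alloc = min_spp + np.floor(extra * weights).astype(int)
    remainder = extra * weights - np.floor(extra * weights)
    leftover = budget - alloc.sum()
    for idx in np.argsort(remainder.ravel())[::-1][:leftover]:
        alloc.ravel()[idx] += 1
    return alloc
```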
Instead of computing on a large number of virtual point lights (VPLs), scalable many-lights rendering methods effectively simulate various illumination effects using only hundreds or thousands of representative VPLs. However, gathering illumination from these representative VPLs, especially computing visibility, remains a tedious and time-consuming task. In this paper, we propose a new matrix sampling-and-recovery scheme to efficiently gather illumination by sampling only a small number of visibilities between representative VPLs and surface points. Our approach is based on the observation that the lighting matrix used in many-lights rendering is low-rank, so it is possible to sparsely sample a small number of entries and then numerically complete the entire matrix. We propose a three-step algorithm to exploit this observation. First, we design a new VPL clustering algorithm to slice the rows and group the columns of the full lighting matrix into a number of reduced matrices, which are sampled and recovered individually. Second, we propose a novel prediction method that predicts the visibility of matrix entries from sparsely and randomly sampled entries. Finally, we adapt the matrix separation technique to recover the entire reduced matrix and compute the final shading. Experimental results show that our method greatly reduces the visibility sampling required in the final gathering and achieves a 3--7x speedup over state-of-the-art methods on test scenes.
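The low-rank completion idea can be sketched with a generic hard-impute style iteration (not the paper's matrix-separation solver): alternate a rank-truncated SVD with re-imposing the sparsely sampled entries. The `complete_low_rank` name and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def complete_low_rank(samples, mask, rank, n_iters=200):
    """Recover a low-rank matrix from sparsely observed entries.

    `samples` holds the observed values (anything outside `mask` is
    ignored). Each iteration projects the current estimate onto the set
    of rank-`rank` matrices via a truncated SVD, then restores the known
    entries, so observed values are matched exactly on return."""
    X = np.where(mask, samples, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = samples[mask]
    return X
```

On a generic low-rank matrix with a majority of entries observed, this simple scheme typically recovers the missing entries to high accuracy.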
Figure 1: Example scenes rendered using our approach on an NVIDIA GTX 680 GPU with 2 GB of memory. The left image is a museum scene consisting of 117.1 million triangles and 32.4 million lights; the total storage sizes of the geometry and lights are 14.3 GB and 3.75 GB, respectively. The middle image shows an airport scene with two Boeing 777 models, with a total of 669.3 million triangles (46.3 GB) and 18 million lights (2.1 GB). The right image is a carnival scene with 17.1 million triangles (1.76 GB) and 256 million lights (29.6 GB). Our method takes 3m55s, 10m15s, and 1m22s to shade the museum, airport, and carnival scenes, respectively, and requires an additional 1m20s, 7m25s, and 1m14s to build acceleration structures over the lights and geometry.

Abstract: In this paper, we present a GPU-based out-of-core rendering approach under the many-lights rendering framework. Many-lights rendering is an efficient and scalable framework for handling a large number of lights, but when the data sizes of lights and geometry both exceed the in-core memory, managing these two out-of-core data sets becomes critical. In our approach, we formulate this data management as a graph-traversal optimization problem: we first build the out-of-core lights and geometry data into a graph, and then guide shading computations by finding a shortest path that visits all vertices in the graph. Based on the proposed data management, we develop a GPU-based out-of-GPU-core rendering algorithm that manages data between CPU host memory and GPU device memory. The algorithm takes two main steps: out-of-core data preparation, which packs data into layouts optimized for many-lights rendering, and out-of-core shading using graph-based data management. We demonstrate our algorithm on scenes with out-of-core detailed geometry and out-of-core lights.
Results show that our approach generates complex global illumination effects with increased data-access coherence and achieves a one-order-of-magnitude performance gain over the CPU-based approach.
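The graph-guided shading order can be illustrated with a nearest-neighbour heuristic for the shortest-path-visiting-all-vertices problem. Here the cost matrix stands in for the cost of swapping data chunks between host and device memory, and the greedy rule is an illustrative simplification of the paper's traversal optimization, not its actual algorithm.

```python
import numpy as np

def greedy_visit_order(cost, start=0):
    """Order work over data chunks by greedily walking a swap-cost graph:
    from the current vertex, always move to the unvisited vertex that is
    cheapest to load next. `cost[i, j]` is the cost of switching from
    chunk i to chunk j."""
    n = cost.shape[0]
    visited = [start]
    remaining = set(range(n)) - {start}
    while remaining:
        cur = visited[-1]
        nxt = min(remaining, key=lambda v: cost[cur, v])
        visited.append(nxt)
        remaining.remove(nxt)
    return visited
```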
Several scalable many-light rendering methods have been proposed recently for the efficient computation of global illumination. However, gathering the contributions of virtual lights in participating media remains an inefficient and time-consuming task. In this paper, we present a novel sparse sampling and reconstruction method to accelerate the gathering step of many-light rendering for participating media. Our technique exploits the observation that the scattered lighting is usually locally coherent and of low rank, even in heterogeneous media. In particular, we first introduce a matrix formulation with light segments as columns and eye-ray segments as rows, casting the gathering step as a matrix sampling and reconstruction problem. We then propose an adaptive matrix column sampling and completion algorithm that efficiently reconstructs the matrix by sampling only a small number of elements. Experimental results show that our approach greatly improves performance, obtaining up to one order of magnitude speedup over other state-of-the-art many-light rendering methods for participating media.
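A toy sketch of column-based sampling and completion (illustrative, not the paper's adaptive algorithm): sample a few columns of the transport matrix in full, probe each remaining column at a few rows, and complete it as the least-squares combination of the full columns that matches its probes. The `sparse_column_complete` name and the `entry(i, j)` oracle interface are assumptions.

```python
import numpy as np

def sparse_column_complete(entry, shape, col_idx, row_idx):
    """`entry(i, j)` evaluates one matrix element (e.g. one light-segment /
    eye-ray-segment transport sample). Columns in `col_idx` are sampled
    in full; every other column is only probed at rows `row_idx` and then
    completed as a least-squares combination of the full columns. If the
    sampled columns span the (low-rank) column space, completion is
    exact."""
    n_rows, n_cols = shape
    C = np.array([[entry(i, j) for j in col_idx] for i in range(n_rows)])
    X = np.zeros(shape)
    X[:, col_idx] = C
    for j in range(n_cols):
        if j in col_idx:
            continue
        probes = np.array([entry(i, j) for i in row_idx])
        coeff, *_ = np.linalg.lstsq(C[row_idx, :], probes, rcond=None)
        X[:, j] = C @ coeff
    return X
```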
Image-space auxiliary features such as surface normal have significantly contributed to the recent success of Monte Carlo (MC) reconstruction networks. However, path-space features, another essential piece of light propagation, have not yet been sufficiently explored. Due to the curse of dimensionality, information flow between a regression loss and high-dimensional path-space features is sparse, leading to difficult training and inefficient usage of path-space features in a typical reconstruction framework. This paper introduces a contrastive manifold learning framework to utilize path-space features effectively. The proposed framework employs weakly-supervised learning that converts reference pixel colors to dense pseudo labels for light paths. A convolutional path-embedding network then induces a low-dimensional manifold of paths by iteratively clustering intra-class embeddings, while discriminating inter-class embeddings using gradient descent. The proposed framework facilitates path-space exploration of reconstruction networks by extracting low-dimensional yet meaningful embeddings within the features. We apply our framework to the recent image- and sample-space models and demonstrate considerable improvements, especially on the sample space. The source code is available at https://github.com/Mephisto405/WCMC.
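The contrastive idea can be sketched in a few lines: pull same-class embeddings together and hinge-push different-class embeddings apart. The `contrastive_step` function, the pairwise squared-distance/hinge loss, and the margin value are toy assumptions standing in for the paper's convolutional path-embedding network and pseudo-label pipeline.

```python
import numpy as np

def contrastive_step(emb, labels, lr=0.1, margin=1.0):
    """One gradient step of a pairwise contrastive objective.

    Same-label pairs contribute an attractive term (gradient of the
    squared distance); different-label pairs closer than `margin`
    contribute a repulsive hinge term pushing them apart."""
    grad = np.zeros_like(emb)
    n = len(emb)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = emb[i] - emb[j]
            dist = np.linalg.norm(diff) + 1e-12
            if labels[i] == labels[j]:
                grad[i] += diff                           # pull together
            elif dist < margin:
                grad[i] -= (margin - dist) * diff / dist  # push apart
    return emb - lr * grad / n
```

Iterating this step shrinks intra-class distances while keeping classes at least a margin apart, which is the clustering behaviour the embedding network is trained toward.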