Figure 1: A comparison of our vertex connection and merging (VCM) algorithm against bidirectional path tracing (BPT) and stochastic progressive photon mapping (PPM).

Overview. Light transport simulation is an essential element in realistic image synthesis for computer-generated imagery. However, developing robust light transport simulation algorithms that can deal with arbitrary input scenes (scene geometry, surface reflectance, light sources) remains an elusive challenge. Although efficient light transport algorithms exist, an acceptable approximation error in a reasonable amount of time is usually achieved only for specific types of input. To address this problem, we present [1] a reformulation of the popular density estimator known in computer graphics as "photon mapping" [2-4] as a bidirectional path sampling technique for Monte Carlo light transport simulation [6]. The benefit of our new formulation is twofold. First, it makes it possible to explain the relative efficiency of photon mapping and bidirectional path tracing [5,7,8], algorithms that have so far been considered conceptually incompatible. More importantly, it allows a seamless integration of the two methods into a more robust combined light transport simulation algorithm, dubbed vertex connection and merging (VCM). A progressive version of this algorithm is consistent and efficiently handles a wide variety of lighting conditions, ranging from direct illumination and diffuse inter-reflections to specular-diffuse-specular light transport, which is notoriously difficult for bidirectional path tracing. Our theoretical analysis shows that VCM inherits the high asymptotic performance of bidirectional path tracing for most light transport path types, while benefiting from the efficiency of photon mapping for specular-diffuse-specular lighting effects.

Results.
A comparison of our vertex connection and merging (VCM) algorithm against bidirectional path tracing (BPT) and progressive photon mapping (PPM) [2,4] after 30 min of rendering is shown in Figure 1. BPT fails to reproduce the light focused by the vase and reflected in the mirror (specular-diffuse-specular transport paths), while PPM has difficulties handling the illumination coming from the room seen in the mirror. Our VCM algorithm automatically computes a good mixture of sampling techniques from BPT and PPM to robustly capture the entire illumination. The rightmost column shows, in false color, the relative contributions of the path sampling techniques from BPT and PPM, respectively, to the VCM image.
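The mechanism that lets VCM mix the two families of techniques is multiple importance sampling (MIS). As a hedged illustration (the function name and pdf values below are made up for this sketch, not taken from the paper), the balance heuristic weights each technique by its relative path pdf, so whichever of vertex connection or vertex merging samples a given path more densely dominates its estimate:

```python
def balance_weights(pdfs):
    """Balance-heuristic MIS weights: w_i = p_i / sum_j p_j."""
    total = sum(pdfs)
    return [p / total for p in pdfs]

# For an SDS path that vertex connection samples with very low density
# but vertex merging samples densely, merging receives almost all weight
# (the pdf values 0.02 and 0.98 are illustrative):
w_vc, w_vm = balance_weights([0.02, 0.98])
```

This weighting is what produces the smooth per-pixel blend of technique contributions visualized in the false-color column of Figure 1.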
Significant advances have recently been achieved in real-time ray tracing, but real-time performance for complex scenes still requires computational resources beyond those of the CPUs in standard PCs. Most of these PCs, however, also contain modern GPUs that offer much greater raw compute power. So far, limitations in the programming and memory model have kept the performance of GPU ray tracers well below that of their CPU counterparts. In this paper we present a novel packet ray traversal implementation that completely eliminates the need to maintain a stack during kd-tree traversal and reduces the number of traversal steps per ray. While CPUs benefit only moderately from the stackless approach, it improves GPU performance significantly. We achieve a peak performance of over 16 million rays per second for reasonably complex scenes, including complex shading and secondary rays. Several examples show that with this new technique GPUs can actually outperform equivalent CPU-based ray tracers.
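To see why a stack can be avoided during kd-tree traversal at all, consider the classic "kd-restart" scheme sketched below (illustrative Python only; this is a simpler stackless strategy than the one in the paper, not its implementation): instead of pushing far children onto a stack, the ray's parametric interval is advanced past each visited leaf and traversal restarts from the root.

```python
class KDNode:
    """Interior node: axis/split/children set. Leaf: prims is a list."""
    def __init__(self, axis=None, split=None, left=None, right=None, prims=None):
        self.axis, self.split = axis, split
        self.left, self.right = left, right
        self.prims = prims

def traverse_restart(root, o, d, t_scene_max, eps=1e-6):
    """Visit leaves along the ray front-to-back without a traversal stack."""
    t_min = 0.0
    while t_min < t_scene_max:
        node, t_seg_max = root, t_scene_max
        while node.prims is None:                     # descend to a leaf
            a = node.axis
            t_split = ((node.split - o[a]) / d[a]) if abs(d[a]) > eps else float('inf')
            # child containing the current segment's entry point
            near = node.left if o[a] + d[a] * (t_min + eps) < node.split else node.right
            if t_min < t_split < t_seg_max:
                t_seg_max = t_split                   # clip segment; far side is reached after restart
            node = near
        yield node.prims                              # caller intersects these primitives
        t_min = t_seg_max + eps                       # advance past the leaf, restart at root
```

Restarting trades traversal steps for the stack; the paper's approach instead removes the redundant re-descents, which is why it also reduces the number of traversal steps per ray.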
The visualization of high-quality isosurfaces at interactive rates is an important tool in many simulation and visualization applications. Today, isosurfaces are most often visualized by extracting a polygonal approximation that is then rendered via graphics hardware, or by using a special variant of preintegrated volume rendering. However, these approaches have a number of limitations in terms of isosurface quality, performance on complex data sets, or supported shading models. An alternative isosurface rendering method that does not suffer from these limitations is to ray trace the isosurface directly. This approach, however, has been much too slow for interactive applications unless massively parallel shared-memory supercomputers were used. In this paper, we implement interactive isosurface ray tracing on commodity desktop PCs by building on recent advances in real-time ray tracing of polygonal scenes and using them to improve isosurface ray tracing performance as well. The high performance and scalability of our approach is demonstrated with several practical examples, including the visualization of highly complex isosurface data sets, the interactive rendering of hybrid polygonal/isosurface scenes with high-quality ray-traced shading effects, and even interactive global illumination on isosurfaces.
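Per ray, direct isosurface ray tracing reduces to finding the first parameter t where the scalar field crosses the isovalue. A minimal sketch of that idea, assuming fixed-step sampling with bisection refinement (all names are illustrative; the paper uses far more optimized cell-based intersection):

```python
def point_at(o, d, t):
    """Point on the ray o + t*d (3D tuples)."""
    return tuple(oi + di * t for oi, di in zip(o, d))

def ray_isosurface_hit(field, o, d, iso, t_max, step=0.05, iters=30):
    """March along the ray; when field - iso changes sign between samples,
    refine the bracketed crossing by bisection. 'field' maps a 3D point to
    a scalar. Fixed stepping can miss thin features; this is only a sketch."""
    t_prev = 0.0
    f_prev = field(point_at(o, d, t_prev)) - iso
    t = step
    while t <= t_max:
        f = field(point_at(o, d, t)) - iso
        if f_prev * f <= 0.0:                     # crossing bracketed in [t_prev, t]
            a, b = t_prev, t
            for _ in range(iters):
                m = 0.5 * (a + b)
                if (field(point_at(o, d, m)) - iso) * f_prev <= 0.0:
                    b = m                          # crossing lies in [a, m]
                else:
                    a = m                          # crossing lies in [m, b]
            return 0.5 * (a + b)
        t_prev, f_prev = t, f
        t += step
    return None                                   # no crossing up to t_max
```

Because the field is evaluated analytically at the hit point, shading normals and effects such as shadows and global illumination compose naturally, which is the quality advantage the abstract refers to.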
Recent GPU ray tracers can already achieve performance competitive with that of their CPU counterparts. Nevertheless, these systems cannot yet fully exploit the capabilities of modern GPUs and can only handle medium-sized, static scenes. In this paper we present a BVH-based GPU ray tracer with a parallel packet traversal algorithm using a shared stack. We also present a fast, CPU-based BVH construction algorithm which very accurately approximates the surface area heuristic using streamed binning, while still being an order of magnitude faster than previously published results. Furthermore, using a BVH allows us to push the size limit of supported scenes on the GPU: we can now ray trace the 12.7 million triangle Power Plant model at 1024×1024 image resolution at 3 fps, including shading and shadows.
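The surface area heuristic (SAH) that binning approximates can be sketched as follows. This simplified 1D version (interval length stands in for AABB surface area, and all names are assumptions; it is not the paper's streamed variant) bins primitives by centroid in one pass, then sweeps the bin boundaries for the cheapest split:

```python
def binned_sah_split(prims, n_bins=8):
    """Pick a split plane for 1D primitive intervals (lo, hi) by binned SAH:
    bin by centroid in a single pass, then sweep candidate planes at bin
    edges, scoring each by count-weighted child extents."""
    centroids = [(lo + hi) * 0.5 for lo, hi in prims]
    c_min, c_max = min(centroids), max(centroids)
    if c_max == c_min:
        return None                                # all centroids coincide
    scale = n_bins / (c_max - c_min)
    counts = [0] * n_bins
    bounds = [[float('inf'), float('-inf')] for _ in range(n_bins)]
    for (lo, hi), c in zip(prims, centroids):
        b = min(int((c - c_min) * scale), n_bins - 1)
        counts[b] += 1
        bounds[b][0] = min(bounds[b][0], lo)
        bounds[b][1] = max(bounds[b][1], hi)
    best_cost, best_plane = float('inf'), None
    for i in range(1, n_bins):                     # candidate planes at bin edges
        n_l, n_r = sum(counts[:i]), sum(counts[i:])
        if n_l == 0 or n_r == 0:
            continue
        sa_l = max(b[1] for b, n in zip(bounds[:i], counts[:i]) if n) - \
               min(b[0] for b, n in zip(bounds[:i], counts[:i]) if n)
        sa_r = max(b[1] for b, n in zip(bounds[i:], counts[i:]) if n) - \
               min(b[0] for b, n in zip(bounds[i:], counts[i:]) if n)
        cost = n_l * sa_l + n_r * sa_r             # SAH: area-weighted primitive counts
        if cost < best_cost:
            best_cost, best_plane = cost, c_min + i / scale
    return best_plane
```

Binning is what makes the heuristic cheap: instead of sorting primitives and sweeping every possible plane, only n_bins-1 candidates are scored from per-bin aggregates.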
Figure 1: Images generated by our foveated renderer showing the effect of different configurations for the foveal region: (a) small foveal region (r0 = 5, r1 = 10, p_min = 0.01); (b) medium foveal region (r0 = 10, r1 = 20, p_min = 0.05); (c) full renderer, ray tracing every pixel.

Abstract. Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames, in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer provides an initial image, refined smoothly by adding more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 pixels per eye within the VSync limits without perceived visual differences.
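The (r0, r1, p_min) parameters in Figure 1 describe how per-pixel sampling probability falls off with eccentricity, i.e. angular distance from the tracked gaze point. A minimal sketch, assuming a simple linear falloff between the two radii (the paper's actual falloff function may differ):

```python
def sample_probability(ecc_deg, r0, r1, p_min):
    """Probability of shooting a new ray for a pixel at eccentricity
    ecc_deg (degrees from gaze): 1.0 inside the foveal radius r0,
    a linear ramp down to p_min at r1, and p_min in the periphery.
    The linear ramp is an assumption for illustration."""
    if ecc_deg <= r0:
        return 1.0
    if ecc_deg >= r1:
        return p_min
    t = (ecc_deg - r0) / (r1 - r0)                 # 0 at r0, 1 at r1
    return 1.0 + t * (p_min - 1.0)
```

With the "small" configuration (r0 = 5, r1 = 10, p_min = 0.01), only one peripheral pixel in a hundred receives a fresh sample per frame; reprojection from previous frames fills the rest.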
We present a fast, parallel GPU algorithm for construction of uniform grids for ray tracing, which we implement in CUDA. The algorithm performance does not depend on the primitive distribution, because we reduce the problem to sorting pairs of primitives and cell indices. Our implementation is able to take full advantage of the parallel architecture of the GPU, and construction speed is faster than CPU algorithms running on multiple cores. Its scalability and robustness make it superior to alternative approaches, especially for scenes with complex primitive distributions.
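The reduction to sorting can be sketched as follows (illustrative Python; on the GPU the sort is a parallel radix sort and the final compaction a parallel pass, and the function and parameter names here are assumptions): each primitive emits one (cell index, primitive id) pair per overlapped cell, and after sorting by cell index every cell's primitive list is a contiguous run.

```python
def build_uniform_grid(prim_bounds, grid_min, cell_size, res):
    """Build per-cell primitive lists for a uniform grid.
    prim_bounds: list of (lo, hi) AABBs as 3D tuples; res: (nx, ny, nz)."""
    pairs = []
    for pid, (lo, hi) in enumerate(prim_bounds):
        # range of cells overlapped by the primitive's AABB, clamped to the grid
        lo_c = [max(0, int((lo[a] - grid_min[a]) / cell_size[a])) for a in range(3)]
        hi_c = [min(res[a] - 1, int((hi[a] - grid_min[a]) / cell_size[a])) for a in range(3)]
        for z in range(lo_c[2], hi_c[2] + 1):
            for y in range(lo_c[1], hi_c[1] + 1):
                for x in range(lo_c[0], hi_c[0] + 1):
                    cell = (z * res[1] + y) * res[0] + x   # linearized cell index
                    pairs.append((cell, pid))
    pairs.sort()                                           # parallel radix sort on the GPU
    cells = {}                                             # compact sorted runs per cell
    for cell, pid in pairs:
        cells.setdefault(cell, []).append(pid)
    return cells
```

Because the work per primitive is independent and the sort dominates, the running time depends on the total pair count rather than on how primitives cluster, which is the distribution-independence claimed above.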