Improving Interactive Rendering

In recent years a number of traditionally offline rendering algorithms have become interactive or nearly so. The introduction of programmable high-precision graphics processors (GPUs) has drastically expanded the range of algorithms that can be employed in real-time graphics; meanwhile, the steady progress of Moore's Law has made techniques such as ray tracing, long considered too slow for anything but offline realistic rendering, feasible in real-time settings. These trends are related; indeed, some interactive global illumination research performs algorithms such as ray tracing and photon mapping directly on the GPU [PBMH02]. Future hardware should provide even better support for these algorithms, bringing us closer to the day when ray-based algorithms are an accepted and powerful component of every interactive rendering system.

What makes interactive ray tracing attractive? Researchers in the area have cited ray tracing's ability to model physically accurate global illumination phenomena, its easy applicability to different shaders and primitives, and its output-sensitive running time, which is only weakly dependent on scene complexity [WPS*03]. We focus on another unique capability: selective sampling of the image plane. By design, depth-buffered rasterization must generate an entire image at a time, but ray tracing can focus rendering with very fine granularity. This ability enables a new approach to rendering that is both more interactive and more accurate.

The topic of sampling in ray tracing may seem nearly exhausted, but most previous work has focused on spatial sampling.

[Figure caption fragment: Resulting imagery has similar visual quality to a framed renderer but is produced using an order of magnitude fewer samples per second.]
Real-time rendering requires accurate display of a dynamic scene with minimal delay. Frameless rendering [Bishop et al. 1994] offers unique flexibility in this regard: because it samples time per pixel, it can respond to change with very little delay, and at any location in the image. However, sampling is random, resulting in blurring in changing image regions. We present an approach for improving frameless rendering by making sampling sensitive to change in the image, as suggested in [Bishop et al. 1994]. By measuring this change in visual terms, we are able to direct sampling to those regions of change. The resulting algorithm produces sharper imagery while introducing minimal overhead into the standard frameless algorithm.

We measure change by monitoring color differences in the image, using the summed squared difference between component colors at a pixel in the previous and current rendering. We use a probability distribution function (PDF) to choose the next pixel rendered, so that changing image regions are sampled more frequently. The probability of each pixel is the weighted sum of its color difference and its age (the time since it was last updated), both normalized over the entire image. The former biases rendering toward regions of change; the latter monitors for change in previously static image regions and ensures that all pixels are sampled with a certain minimal frequency. The PDF is subsampled into rectangular tiles in image space. Besides bringing obvious improvements in speed, subsampling implements a spatially coherent response to change: if one pixel is changing, neighboring pixels are likely also changing. The probability that one of the pixels in a tile will be rendered is the sum of the probabilities of its component pixels, normalized by the summed probabilities of all pixels in the image. To determine which pixel to render, we first select a tile according to the subsampled PDF, and then randomly select a pixel within that tile, weighting the choice by bilinear interpolation of the surrounding tile probabilities, just as in bilinear texture filtering.

Our renderer displays sharper imagery while using the same number of rays as a conventional frameless renderer. Figures 1 and 2 show corresponding frames of a video at a simulated rendering rate of 900,000 rays per second. In interactive use, our current renderer casts roughly two thirds as many rays per second as the standard renderer, but the resulting images are still sharper.

Our major goals in future research are improving dynamic image quality, evaluating and tuning the performance vs. accuracy tradeoff, and comparing this approach to existing approaches. To improve image quality, we will investigate alternative image quality metrics, including sensitivity to spatial contrast and Gibsonian patterns of motion. We will also incorporate progressive rendering [Bergman et al. 1986] into the ray tracer, enabling control of the tradeoff between temporal and spatial accuracy, a line of research that we believe will be particularly fruitful.
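The following sketch illustrates the selection procedure just described: a per-pixel priority combining normalized color difference and normalized age, a tile-level PDF, and a two-stage tile-then-pixel draw. The function names, the equal weighting, and the 8x8 tile size are our own illustrative assumptions, and the within-tile draw is simplified to a uniform choice rather than the bilinearly interpolated one described above.

    # Illustrative sketch of change-driven sample selection (not the
    # authors' code). Weights and tile size are assumed values.
    import numpy as np

    TILE = 8  # tile edge length in pixels (assumption)

    def pixel_priorities(prev, curr, age, w_change=0.5, w_age=0.5):
        """Weighted sum of normalized color difference and normalized age."""
        # Summed squared difference between component colors per pixel.
        diff = ((curr.astype(float) - prev.astype(float)) ** 2).sum(axis=-1)
        diff /= diff.sum() + 1e-12        # normalize over the whole image
        a = age / (age.sum() + 1e-12)     # likewise for age
        return w_change * diff + w_age * a

    def select_pixel(priority, rng):
        """Draw a tile from the subsampled PDF, then a pixel inside it."""
        h, w = priority.shape  # assumes h and w are divisible by TILE
        tiles = priority.reshape(h // TILE, TILE, w // TILE, TILE).sum(axis=(1, 3))
        pdf = (tiles / tiles.sum()).ravel()
        t = rng.choice(pdf.size, p=pdf)   # tile index, drawn by probability
        ty, tx = divmod(t, w // TILE)
        # Simplification: uniform within the tile; the text instead biases
        # this choice by bilinear interpolation of neighboring tile PDFs.
        return ty * TILE + rng.integers(TILE), tx * TILE + rng.integers(TILE)

In use, a renderer would recompute priorities incrementally as samples land, then call select_pixel(priorities, np.random.default_rng()) to pick each ray's target.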
Making computer imagery more responsive and realistic is one of the most basic goals of graphics researchers, and adaptive display is one of the primary means of achieving it. While previous displays have achieved spatial adaptivity, our research focuses on achieving temporal adaptivity: sampling some regions not only more densely, but also more often. We use closed-loop feedback to guide sampling to image regions that change significantly over space or time. Adaptive reconstruction emphasizes older samples in static settings, resulting in sharper images, and newer samples in dynamic settings, resulting in images that may be blurry but are up to date. In terms of peak signal-to-noise ratio, this prototype produces much better image streams than nonadaptive renderers with the same simulated sampling rates. This new display also offers new opportunities for adapting to user state, allowing adaptive response both where and when it is needed. Our prototype system already responds interactively to changes in the user's viewpoint; it might also respond to any of a number of other indications of user state, including eye tracking, repeatedly manipulated objects, and biometrics.
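A minimal sketch of the age-sensitive reconstruction described above, assuming an exponential falloff in sample age whose rate is steered by a local change estimate; the falloff form, the rate constants, and the function name are illustrative assumptions rather than the prototype's actual filter.

    # Hypothetical age-weighted reconstruction of one pixel from nearby
    # samples: static regions keep old samples (sharp), dynamic regions
    # favor fresh ones (current, but spatially blurrier).
    import numpy as np

    def reconstruct_pixel(colors, ages, change, k_static=0.1, k_dynamic=5.0):
        """colors: (n, 3) sample colors; ages: (n,) sample ages;
        change: local change estimate in [0, 1] (0 static, 1 dynamic)."""
        k = (1.0 - change) * k_static + change * k_dynamic  # decay rate
        w = np.exp(-k * ages)  # older samples fade faster when change is high
        return (w[:, None] * colors).sum(axis=0) / w.sum()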