Abstract: Modern virtual reality simulations require a constantly high frame rate from the rendering engine. They may also require very low latency and stereo images. Previous rendering engines for virtual reality applications have exploited spatial and temporal coherence by using image warping to re-use previous frames or to render a stereo pair at lower cost than running the full render pipeline twice. However, these previous approaches have shown artifacts or have not scaled well with image size. We present a n…
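To make the warping idea above concrete, here is a minimal sketch of depth-based reprojection: a pixel from a previous frame is unprojected using its stored depth and then projected into the new view, so shaded results can be reused instead of re-rendered. The plain 4x4 matrix types, the row-major layout, and the function names are illustrative assumptions, not part of any of the cited engines.

```cpp
// Minimal sketch of depth-based reprojection, the core of image warping.
// A previous-frame pixel (given in NDC with its depth) is unprojected to
// world space and re-projected with the current view-projection matrix.
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major, assumed layout

static Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// invViewProjPrev: inverse view-projection of the previous frame.
// viewProjCurr:    view-projection of the current frame.
Vec4 reproject(const Vec4& ndcPrev,
               const Mat4& invViewProjPrev,
               const Mat4& viewProjCurr) {
    Vec4 world = mul(invViewProjPrev, ndcPrev);   // back to world space
    float wW = world[3];
    for (float& c : world) c /= wW;               // perspective divide

    Vec4 ndcCurr = mul(viewProjCurr, world);      // into the new view
    float wC = ndcCurr[3];
    for (float& c : ndcCurr) c /= wC;
    return ndcCurr;                               // new screen position
}
```

In a real renderer this runs per pixel on the GPU and is followed by a validity test, since disoccluded regions have no source pixel to copy from.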
“…We recommend following Schollmeyer et al.'s [SSB*17] A‐buffer method if translucent materials are to be reused.…”
Section: Discussion (mentioning, confidence: 99%)
“…Schollmeyer et al. [SSB*17] improve on this concept with a tighter grid and support for transparency with an A‐buffer‐based [Eng14] data structure, where transparent, rasterized fragments are stored and subsequently ray‐traced from the new view. Without that, transparent fragments are difficult to warp as they are usually accumulated (alpha‐blended) and have no distinct depths.…”
Section: Related Work (mentioning, confidence: 99%)
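As a rough illustration of the A-buffer idea described in the quote above, the sketch below keeps every transparent fragment, together with its own depth, in a per-pixel linked list instead of alpha-blending it away, so a later pass can still warp or ray-cast the stored fragments individually. The struct layout, field names, and CPU-side std::vector storage are assumptions chosen for readability; GPU implementations typically use atomically appended buffers instead.

```cpp
// Illustrative A-buffer style per-pixel fragment list (not the authors'
// implementation): each transparent fragment keeps its own colour, alpha
// and depth, linked into a list per pixel.
#include <cstdint>
#include <vector>

struct TransparentFragment {
    float    depth;        // per-fragment depth, kept instead of blending it away
    float    alpha;        // coverage for later front-to-back compositing
    uint32_t packedColor;  // packed RGBA8 colour
    int32_t  next;         // index of the next fragment in this pixel's list, -1 ends it
};

struct ABuffer {
    std::vector<int32_t>             heads;      // one list head per pixel
    std::vector<TransparentFragment> fragments;  // shared fragment pool

    ABuffer(int width, int height)
        : heads(static_cast<size_t>(width) * height, -1) {}

    // Insert a fragment for pixel index `pixel` (y * width + x).
    void insert(size_t pixel, TransparentFragment f) {
        f.next = heads[pixel];                           // prepend to the pixel's list
        heads[pixel] = static_cast<int32_t>(fragments.size());
        fragments.push_back(f);
    }
};
```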
“…A second class of artefacts is caused by mismatches in exact pixel locations for the reprojected data, i.e. small holes or inaccuracies in warping with coarse grid reprojection [WRK*16, SSB*17, DRE*10]. While some of the arising artefacts can be handled with cheap hole filling schemes, others must be solved by expensive redrawing.…”
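A minimal sketch of the two-tier handling mentioned in the quote above: after reprojection, pixels that received no sample are holes; isolated holes surrounded by valid pixels are filled cheaply from a neighbour, while anything larger is flagged for redrawing. The buffer layout, the 4-neighbour heuristic, and the redraw mask are illustrative assumptions.

```cpp
// Classify warped pixels: cheap hole filling for small gaps,
// a redraw flag for everything else. Border pixels are skipped
// for brevity; needsRedraw is assumed to be pre-sized and cleared.
#include <cstdint>
#include <vector>

void resolveHoles(const std::vector<bool>& valid,       // per pixel: did the warp hit?
                  std::vector<uint32_t>& color,         // warped colour buffer
                  std::vector<bool>& needsRedraw,       // output: pixels to redraw
                  int width, int height) {
    for (int y = 1; y + 1 < height; ++y) {
        for (int x = 1; x + 1 < width; ++x) {
            const int i = y * width + x;
            if (valid[i]) continue;                     // pixel was warped successfully
            const int n[4] = { i - 1, i + 1, i - width, i + width };
            int validNeighbours = 0, donor = -1;
            for (int k : n)
                if (valid[k]) { ++validNeighbours; donor = k; }
            if (validNeighbours == 4)
                color[i] = color[donor];                // cheap fill from a neighbour
            else
                needsRedraw[i] = true;                  // expensive: redraw this pixel
        }
    }
}
```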
Rendering in real time for virtual reality headsets with high user immersion is challenging due to strict frame-rate constraints as well as a low tolerance for artefacts. Eye tracking‐based foveated rendering presents an opportunity to strongly increase performance without loss of perceived visual quality. To this end, we propose a novel foveated rendering method for virtual reality headsets with integrated eye tracking hardware. Our method recycles pixels in the periphery by spatio‐temporally reprojecting them from previous frames. Artefacts and disocclusions caused by this reprojection are detected and re‐evaluated according to a confidence value that is determined by a newly introduced, formalized perception‐based metric, referred to as the confidence function. The foveal region, as well as areas with low confidence values, are redrawn efficiently, as the confidence value allows for fine-grained regulation of hierarchical geometry and pixel culling. Hence, the average primitive processing and shading costs are lowered dramatically. Evaluated against regular rendering as well as established foveated rendering methods, our approach shows increased performance in both cases. Furthermore, our method is not restricted to static scenes and provides an acceleration structure for post‐processing passes.
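The paper's perception-based confidence function is not reproduced in the abstract, so the following is only a toy illustration of the general idea: a per-pixel confidence derived from eccentricity relative to the tracked gaze point and from a reprojection-error estimate, with low-confidence pixels (and the whole foveal region) scheduled for redrawing. All constants, the foveal radius, and the way the two terms are combined are assumptions.

```cpp
// Toy confidence estimate, NOT the published confidence function.
#include <algorithm>
#include <cmath>

struct GazeSample { float x, y; };    // tracked gaze position in pixels

float confidence(float px, float py,              // pixel centre
                 const GazeSample& gaze,
                 float reprojError,                // e.g. depth/colour mismatch
                 float fovealRadiusPx) {
    const float dx = px - gaze.x, dy = py - gaze.y;
    const float eccentricity = std::sqrt(dx * dx + dy * dy);
    // The foveal region is never trusted: it is always redrawn.
    if (eccentricity < fovealRadiusPx) return 0.0f;
    // Farther into the periphery, reprojected pixels are trusted more,
    // but large reprojection errors still pull confidence down.
    const float peripheryTerm =
        std::min(1.0f, (eccentricity - fovealRadiusPx) / (4.0f * fovealRadiusPx));
    const float errorTerm = std::exp(-reprojError);
    return peripheryTerm * errorTerm;              // in [0, 1]
}

bool shouldRedraw(float conf, float threshold = 0.5f) {
    return conf < threshold;                       // low confidence => redraw
}
```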
“…The main drawback of warping is that it suffers disocclusion artefacts. Some techniques can help ameliorate these, such as perceptually improved hole filling (Didyk et al. 2010; Schollmeyer et al. 2017). Alternatively the result can be improved by changing the images provided to the algorithm itself (Reinert et al. 2016).…”
Section: Previous Work (mentioning, confidence: 99%)
“…Image similarity is measured in terms of an adapted SSIM (Wang et al. 2004) metric. It ignores all disoccluded pixels, i.e., it provides an upper bound on quality to what any hole filling, however sophisticated, could do (Didyk et al. 2010; Schollmeyer et al. 2017). For foveated comparisons, SSIM is computed for the 64×64 foveal pixels.…”
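As a simplified illustration of the masked comparison described in the quote above, the sketch below computes a single-window SSIM over only those pixels the warp actually produced, ignoring disoccluded ones. The cited evaluation uses an adapted, windowed SSIM; this global, single-statistics variant and the maskedSSIM name are simplifications for illustration.

```cpp
// Single-window SSIM restricted to valid (non-disoccluded) pixels.
#include <vector>

double maskedSSIM(const std::vector<double>& a,
                  const std::vector<double>& b,
                  const std::vector<bool>& valid) {
    const double C1 = 6.5025, C2 = 58.5225;   // standard SSIM constants for L = 255
    double meanA = 0, meanB = 0;
    size_t n = 0;
    for (size_t i = 0; i < a.size(); ++i)
        if (valid[i]) { meanA += a[i]; meanB += b[i]; ++n; }
    if (n == 0) return 1.0;                   // nothing comparable: treat as identical
    meanA /= n; meanB /= n;

    double varA = 0, varB = 0, cov = 0;
    for (size_t i = 0; i < a.size(); ++i)
        if (valid[i]) {
            varA += (a[i] - meanA) * (a[i] - meanA);
            varB += (b[i] - meanB) * (b[i] - meanB);
            cov  += (a[i] - meanA) * (b[i] - meanB);
        }
    varA /= n; varB /= n; cov /= n;

    return ((2 * meanA * meanB + C1) * (2 * cov + C2)) /
           ((meanA * meanA + meanB * meanB + C1) * (varA + varB + C2));
}
```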