Figure 1: Left: our foveated resolution method running in a commercial video game engine. Right: our foveated resolution, ambient occlusion, tessellation, and ray-casting methods, respectively. Areas outside the circles are the peripheral regions rendered at lower detail.
We propose an end-to-end solution for presenting movie-quality animated graphics to the user while still allowing the sense of presence afforded by free-viewpoint head motion. By transforming offline-rendered movie content into a novel immersive representation, we display the content in real-time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) using a sparse set of cameras placed in the vicinity of the potential viewer locations. The camera placement is determined by an optimization process so that the rendered data maximise coverage with minimal redundancy, depending on the complexity of the lighting environment. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored to high performance on GPUs for Virtual Reality applications. A view-dependent decompression algorithm decodes only the parts of the compressed video streams that are visible to users. We detail a real-time rendering algorithm using multi-view ray casting, with a variant that can handle strong view-dependent effects such as mirror surfaces and glass. Compression rates of 150:1 and greater are demonstrated, with quantitative analysis of image reconstruction quality and performance.
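A minimal sketch of the view-dependent decoding idea, assuming a conservative cone test between the tracked view direction and each cubemap face axis; the stream interface `decode()` and the field-of-view constant are hypothetical, not the paper's implementation:

```python
# Hedged sketch: decode only the cubemap faces whose cone of directions
# can overlap the view frustum; everything else stays compressed.
import numpy as np

FACE_AXES = {  # outward axis of each cubemap face
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

# Half-angle from a face axis to its corner: arctan(sqrt(2)) ~ 54.7 degrees.
FACE_HALF_ANGLE = np.arctan(np.sqrt(2.0))

def visible_faces(view_dir, fov_deg=110.0):
    """Conservatively list faces that may intersect a symmetric frustum."""
    d = np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)
    limit = np.radians(fov_deg / 2.0) + FACE_HALF_ANGLE
    return [name for name, axis in FACE_AXES.items()
            if np.arccos(np.clip(d @ np.asarray(axis, float), -1, 1)) <= limit]

def decode_visible(face_streams, view_dir):
    """Decode colour/depth streams only for potentially visible faces."""
    return {f: face_streams[f].decode() for f in visible_faces(view_dir)}
```

With a forward view direction of (0, 0, 1) and a 110-degree field of view, this test culls only the -z face; the paper's decoder presumably operates at a finer granularity than whole faces and across the multiple cameras, so the sketch shows only the visibility-culling principle.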
Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are undesirable when the represented surface detail is heterogeneous in terms of elasticity (e.g., a texture containing both skin and bone), as the material looks "rubbery". In this paper we introduce a technique that preserves the appearance of heterogeneous-elasticity textures mapped on deforming surfaces by calculating dense, content-aware parameterisation warps in real-time. We demonstrate the usefulness of our method in a variety of scenarios: from application to production-quality assets, to real-time modelling previews and digital acting.
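To illustrate what a content-aware parameterisation warp can achieve, here is a minimal sketch reduced to one parametric axis, assuming a per-texel rigidity weight and a per-texel surface stretch factor are available; the names `stretch` and `rigidity` and the blending rule are illustrative, and the paper computes dense 2D warps rather than this 1D reduction:

```python
# Hedged sketch: rigid texels keep a constant world-space footprint,
# elastic texels absorb the surface stretch.
import numpy as np

def warp_row(stretch, rigidity):
    """Monotone warp of one row of the parameter domain.

    stretch  : per-texel stretch factor of the deforming surface (> 0)
    rigidity : per-texel weight in [0, 1]; 1 = fully rigid feature
    Returns warped texel boundary positions, normalised to [0, 1].
    """
    # A fully rigid texel shrinks to 1/stretch of its parametric width so
    # that, once mapped on the stretched surface, the feature keeps its
    # rest-pose size; a fully elastic texel keeps width 1 and deforms.
    widths = rigidity / np.maximum(stretch, 1e-6) + (1.0 - rigidity)
    bounds = np.concatenate(([0.0], np.cumsum(widths)))
    return bounds / bounds[-1]

# Example: the middle third of a row is rigid while the surface stretches 2x.
stretch  = np.full(9, 2.0)
rigidity = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
print(np.round(warp_row(stretch, rigidity), 3))
```

In the example, the rigid middle third is compressed in parameter space so that, after the surface stretches by 2x, its mapped features appear at their rest-pose size while the elastic neighbours take up the slack.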
Figure 1: Our filtering results obtained for frame #320 of the SPACELAND sequence. This scene showcases a non-linear camera motion, an animated directional light, and various shading effects including shadows and reflections. The scene has been rendered with Unreal Engine 4 [Epi] using 1 sample per pixel (Non-AA). The presence of fine geometric details and detailed textures produces significant temporal aliasing and flickering artifacts. Our filtering method effectively reduces flickering without creating ghosting artifacts (please watch the supplementary video). Moreover, our approach produces less visual overblur (see insets) than current state-of-the-art solutions for real-time temporal antialiasing, e.g., 1.33 dB better PSNR on average than the Unreal Engine temporal filter (UE4-TAA).

Abstract: We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using linear models. Based on PHLM, our method can predict per-pixel variations of the shading function between consecutive frames, combining temporal reprojection with per-pixel shading predictions in order to provide temporally coherent shading, even in the presence of very noisy input images. Our method can address both spatial and temporal aliasing problems under a unified filtering framework that minimizes the filtering error through a recursive least squares algorithm. We demonstrate our method working with a commercial deferred shading engine for rasterization and with our own OpenGL deferred shading renderer. We have implemented our method on the GPU, and it shows a significant reduction of temporal flicker in very challenging scenarios including foliage rendering, complex non-linear camera motions, dynamic lighting, reflections, shadows and fine geometric details. Our approach, based on PHLM, avoids the creation of visible ghosting artifacts and reduces the filtering overblur characteristic of temporal deflickering methods. At the same time, the results are comparable to state-of-the-art real-time filters in terms of temporal coherence.
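A minimal sketch of the pixel-history idea, assuming PHLM amounts to fitting a per-pixel linear model s(t) = a + b*t over recent frames with an exponentially weighted recursive least squares update; the forgetting factor, the scalar shading value, and the class interface are illustrative assumptions rather than the paper's exact formulation:

```python
# Hedged sketch: one linear model per pixel, updated each frame with
# exponentially weighted recursive least squares (RLS).
import numpy as np

class PixelHistoryLinearModel:
    def __init__(self, lam=0.9):
        self.lam = lam              # forgetting factor in (0, 1]
        self.theta = np.zeros(2)    # parameters [a, b] of s(t) = a + b*t
        self.P = np.eye(2) * 1e3    # (scaled) inverse covariance of the fit

    def update(self, t, s):
        """Fold one sample (frame time t, shading value s) into the model."""
        x = np.array([1.0, t])               # regressor for s(t) = a + b*t
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)         # RLS gain
        self.theta += k * (s - x @ self.theta)
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def predict(self, t):
        """Temporally filtered shading estimate at frame time t."""
        return float(self.theta @ np.array([1.0, t]))
```

For instance, feeding the model noisy samples of a slowly brightening pixel lets `predict` return a flicker-free estimate that still tracks the brightness ramp; in a full filter the prediction would presumably be cross-checked against the reprojected history so that true shading discontinuities reset the model rather than being smoothed over.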
Animation of models often introduces distortions to their parameterisation, as the parameterisation has been optimised for a single frame. When mapping textures or displacements onto a deforming surface with such a constant parameterisation, distortions manifest visually as texture-mapped features appearing uniformly elastic, and such behaviour is not always desired. In this paper we introduce a real-time technique that reduces such parameterisation distortions in areas specified by a provided distortion control (rigidity) map. The parameter space is warped in an axis-aligned way to minimise a non-linear distortion metric using a hybrid CPU-GPU solver. We also extend the technique to compute arbitrary warps for handling more complex use cases. The result is real-time, dynamic, content-aware texturing that reduces distortions in a controlled way. The technique can be applied to reduce distortions in a variety of scenarios where highly detailed rigid features are represented on a map, abstracted from the underlying low-complexity deforming geometry they are mapped on. Such scenarios include reusing a low-geometric-complexity animated sequence with a multitude of detail maps, dynamic procedurally defined features mapped on deformable geometry, and animation authoring previews on texture-mapped models.

RELATED WORK
Parameterisation distortions are typically minimised using mesh parameterisation algorithms [8, 20, 5]. The majority of such algorithms