The ability to render 3D data and scenes onto 2D displays has greatly enhanced the human ability to visualize complex scenes. However, when 3D information is projected onto 2D, many of the natural depth cues are lost or distorted, requiring a higher cognitive effort from the viewer to reconstruct the scene.
A light-field display projects perspective-correct, full-parallax 3D aerial imagery independent of the number of viewers or their positions. As such, light-field displays enable collaboration and promote the analysis of complex information; however, generating the synthetic light field is computationally challenging with off-the-shelf GPUs. Light-field projection fidelity aside, the high Size, Weight and Power (SWaP) cost of light-field rendering ultimately limits the deployment of light-field display systems. This is partly because modern GPUs and rendering APIs do not natively support multi-view rendering.
Light-field displays provide a visual sense of presence by producing a full-parallax three-dimensional aerial and virtual image of portrayed subject matter that satisfies multiple depth cues and that can be engaged naturally and intuitively. This paper documents a comprehensive light-field display system, including computation, photonics, and interaction system components, that is flexible and scalable, establishing a basis for application to both large-scale collaborative and portable, mobile product architectures.
We present a technique to record and process a light field of an object in order to produce a printed holographic stereogram. We use a geometry correction process to maximize the depth of field and depth-dependent surface detail even when the array of viewpoints comprising the light field is coarsely sampled with respect to the angular resolution of the printed hologram. We capture the light field data of an object with a digital still camera attached to a 2D translation stage, and generate hogels (holographic elements) for printing by reprojecting the light field onto a photogrammetrically recovered model of the object and querying the relevant rays to be produced by the hologram with respect to this geometry. This results in a significantly clearer image of detail at different depths in the printed holographic stereogram.
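The core of the geometry-correction step described above is reprojection: each ray a hogel must emit is intersected with the recovered proxy geometry, and the captured view whose line of sight to that surface point best matches the ray is queried. The sketch below illustrates this under simplifying assumptions not taken from the paper: the recovered geometry is stood in for by a single plane, and only the nearest capture view is selected (a real printer would typically blend several neighboring views).

```python
import numpy as np

def intersect_plane(origin, direction, plane_z=0.0):
    """Intersect a ray with the plane z = plane_z, a stand-in here for the
    photogrammetrically recovered object geometry."""
    t = (plane_z - origin[2]) / direction[2]
    return origin + t * direction

def sample_hogel_ray(hogel_pos, ray_dir, cam_positions, lookup):
    """Reproject one hogel ray onto the proxy surface, then query the capture
    camera whose viewpoint best matches that ray (nearest-view selection).

    cam_positions: (N, 3) array of capture-camera centers from the 2D stage.
    lookup(i, p):  returns the radiance of camera i toward surface point p
                   (hypothetical accessor for the captured light-field data).
    """
    surface_pt = intersect_plane(hogel_pos, ray_dir)
    dirs = surface_pt - cam_positions              # rays from cameras to the hit point
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    unit_ray = ray_dir / np.linalg.norm(ray_dir)
    best = int(np.argmax(dirs @ unit_ray))         # closest angular match to the hogel ray
    return lookup(best, surface_pt)
```

Because each queried ray is anchored to a surface point rather than to a fixed reference plane, detail stays sharp across the object's depth range even when the capture grid is coarse relative to the hologram's angular resolution.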
Improved foveated rendering performance for head-mounted displays (HMDs) requires re-architecting the multi-view render pipeline. Such a pipeline must share common triangle culling/clipping operations across views while reordering triangle-versus-view processing and varying the rasterization sampling rate. This paper explores render pipeline designs for improving HMD rendering.
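The reordering described above can be illustrated with a triangle-major loop: geometry work (cull/clip) runs once per triangle, and only the per-view rasterization repeats, at each view's own sampling rate. This is a minimal sketch of the idea, not the paper's pipeline; triangles are reduced to an (inside-frustum, area) pair and shading cost is modeled as a sample count.

```python
from dataclasses import dataclass

@dataclass
class View:
    name: str
    sample_rate: float  # shading samples per unit area; lower in the periphery

def render_multiview(triangles, views):
    """Triangle-major multi-view sketch: each triangle is culled once, then
    rasterized for every view at that view's sampling rate, instead of
    re-running the whole geometry stage once per view.

    triangles: iterable of (inside_frustum: bool, area: float) pairs.
    Returns a dict mapping view name -> total shading samples issued.
    """
    samples = {v.name: 0 for v in views}
    for inside, area in triangles:
        if not inside:            # shared cull/clip decision, not repeated per view
            continue
        for v in views:
            # Foveation: coarser sample_rate means fewer shading samples,
            # but never fewer than one for a visible triangle.
            samples[v.name] += max(1, round(area * v.sample_rate))
    return samples
```

In this ordering, the per-triangle setup cost is amortized across all views, while the `sample_rate` knob captures how a foveated rasterizer spends shading work where the eye can resolve it.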