We previously proposed recombining subpixels across elemental images to triple the spatial resolution of integral imaging light field displays; however, subpixel sampling errors forced us to discard a portion of the subpixels, reducing the angular resolution. In this study, the sampling errors of all subpixels are demonstrated to be zero under a specific system configuration; thus, the spatial resolution is tripled with no loss of angular resolution.
We propose a driving algorithm for field sequential color (FSC) LCDs with a mini-LED backlight. The algorithm adapts to image content through deep learning-based image classification performed on each image segment, minimizing color breakup in every segment. With low color breakup for arbitrary images, the mini-LED FSC-LCD is well suited to applications requiring high resolution and low power consumption.
Integral imaging light field displays (InIm-LFDs) can provide realistic 3D images by showing an elemental image array (EIA) under a lens array. However, computationally generating an EIA in real time on entry-level computing hardware remains challenging because the current practice of projecting many viewpoints onto the EIA incurs heavy computation. This study discards the viewpoint-based strategy, revisits the early point retracing rendering method, and proposes that InIm-LFDs and regular 2D displays share two similar signal processing phases: sampling and reconstructing. An InIm-LFD is demonstrated to create a finite number of static voxels for signal sampling, and each voxel is invariantly formed by homogeneous pixels for signal reconstructing. We obtain the static voxel-pixel mapping in advance through arbitrarily accurate raytracing and store it as a lookup table (LUT). Our EIA rendering method first resamples the input 3D data with the pre-defined voxels and then assigns every voxel's value to its homogeneous pixels through the LUT. As a result, the proposed method reduces the computational complexity by several orders of magnitude; the experimental rendering speed reaches 7 ms per full-HD EIA frame on an entry-level laptop. Finally, considering that a voxel may not be perfectly integrated by its homogeneous pixels (the sampling error), the proposed and conventional viewpoint-based methods are analyzed in the Fourier domain. We prove that even with severe sampling errors, the two methods differ negligibly in the output signal's frequency spectrum. The proposed method breaks the long-standing tradeoff among rendering speed, accuracy, and system complexity for computer-generated integral imaging (CGII), and is expected to remove the barrier to real-time CGII.
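The per-frame rendering step described above, assigning every voxel's value to its homogeneous pixels through a precomputed LUT, can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the array sizes are arbitrary, and the LUT here is filled with random indices, whereas in practice it would come from offline ray tracing and the voxel values from resampling the input 3D scene.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper).
NUM_VOXELS = 1000          # number of pre-defined static voxels
PIXELS_PER_VOXEL = 9       # homogeneous pixels integrating each voxel
EIA_SHAPE = (1080, 1920)   # full-HD elemental image array

rng = np.random.default_rng(0)

# Stand-in for the precomputed LUT: for each voxel, the flat indices of
# its homogeneous pixels in the EIA. Offline raytracing would fill this.
lut = rng.integers(0, EIA_SHAPE[0] * EIA_SHAPE[1],
                   size=(NUM_VOXELS, PIXELS_PER_VOXEL))

def render_eia(voxel_values, lut, eia_shape):
    """Scatter every voxel's value to all of its homogeneous pixels."""
    eia = np.zeros(eia_shape[0] * eia_shape[1], dtype=voxel_values.dtype)
    # Broadcast each voxel value across its homogeneous-pixel indices:
    # one vectorized scatter replaces per-viewpoint projection.
    eia[lut] = voxel_values[:, None]
    return eia.reshape(eia_shape)

# Per frame: resample the input 3D scene onto the pre-defined voxels
# (stubbed with random values here), then scatter through the LUT.
voxel_values = rng.random(NUM_VOXELS).astype(np.float32)
eia = render_eia(voxel_values, lut, EIA_SHAPE)
```

Because the mapping is static, all ray tracing happens once in advance; each frame costs only a resampling pass plus one vectorized scatter, which is what makes the millisecond-scale rendering plausible.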
This study proposes a vision‐correcting near‐eye light field display, which computationally manipulates sampling rays according to an eye's refractive error. Besides myopia and hyperopia, the proposed method can correct astigmatism and high‐order aberrations without any hardware modification. Implementation and experimental verification are provided by taking a severely astigmatic eye as an example.
A real-time elemental image generation method running at 90 FPS on an ordinary PC is proposed for integral imaging light field displays; it neither sacrifices accuracy nor requires high-performance hardware. The method pre-calculates all available voxels and stores the invariable mapping between each voxel and its homogeneous pixels in a lookup table.