A novel method for estimating the parameters of a lenticular lens array is proposed. A single displayed stripe-pattern image is used to estimate the slanted angle and pitch of the lenticular lens: the lens parameters are derived from the pattern parameters observed in a captured image. Experiments on a simulated data set yield slanted-angle and pitch estimation errors of 0.0077° and 0.0002 mm, respectively, and the proposed method is robust to image noise. We also verify that the method reduces crosstalk when applied to a conventional multi-view display.
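The core idea of recovering a periodic lens pattern's orientation and pitch from a captured image can be sketched with frequency-domain peak detection. This is a minimal illustration of the general principle only, not the paper's analytic derivation; the function name and the `pixel_pitch_mm` parameter are assumptions for the example.

```python
import numpy as np

def estimate_lens_parameters(image, pixel_pitch_mm):
    """Estimate the slant angle (degrees) and pitch (mm) of the dominant
    periodic stripe pattern in a captured image via its 2-D spectrum.

    Illustrative sketch only; the paper derives the lens parameters
    analytically from the observed stripe-pattern parameters.
    """
    h, w = image.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = h // 2, w // 2
    spectrum[cy, cx] = 0.0  # suppress the DC component

    # Locate the dominant spatial frequency of the observed stripes
    py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    fy, fx = (py - cy) / h, (px - cx) / w   # cycles per pixel

    slant_deg = np.degrees(np.arctan2(fy, fx))  # stripe orientation
    freq = np.hypot(fx, fy)                     # cycles per pixel
    pitch_mm = pixel_pitch_mm / freq            # observed pattern pitch
    return slant_deg, pitch_mm
```

On a synthetic vertical-stripe image with a known period, the recovered pitch matches the ground truth, which mirrors the kind of simulated evaluation the abstract describes.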
Abstract— Techniques for 3‐D display have evolved from stereoscopic 3‐D systems to multiview 3‐D systems, which provide images corresponding to different viewpoints. New technology is now required for multiview display systems that use input‐source formats such as 2‐D images to generate virtual‐view images for multiple viewpoints. As the viewpoint changes, occluded regions of the original image become disoccluded, so the output image requires information that the input image does not contain. In this paper, a method for generating multiview images through a two‐step process is proposed: (1) depth‐map refinement and (2) disoccluded‐area estimation and restoration. The first step, depth‐map processing, removes depth‐map noise, compensates for mismatches between RGB and depth, and preserves boundaries and object shapes. The second step predicts the disoccluded area from the disparity and restores it using the neighboring frames most similar to the occluded area. Finally, multiview rendering generates virtual‐view images by using a directional rendering algorithm with boundary blending.
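The disoccluded-area estimation in step (2) can be illustrated by forward-warping a view according to its per-pixel disparity and marking the target pixels that no source pixel reaches. This is a simplified sketch under stated assumptions: disparity is taken as `shift_scale * depth` with larger depth values meaning nearer, and the restoration from similar neighboring frames described in the abstract is omitted. All names are illustrative.

```python
import numpy as np

def warp_view(image, depth, shift_scale):
    """Forward-warp a view horizontally by per-pixel disparity and
    mark the disoccluded holes left behind.

    Assumes larger depth = nearer object; disparity = shift_scale * depth.
    Returns the warped image and a boolean mask of disoccluded pixels.
    """
    h, w = depth.shape
    warped = np.zeros_like(image, dtype=float)
    filled = np.zeros((h, w), dtype=bool)
    disparity = np.round(shift_scale * depth).astype(int)
    for y in range(h):
        # Paint far pixels first so nearer pixels overwrite them
        for x in np.argsort(depth[y]):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                warped[y, nx] = image[y, x]
                filled[y, nx] = True
    holes = ~filled  # the disoccluded area to be restored
    return warped, holes
```

In a one-row example, shifting a foreground segment exposes a two-pixel hole exactly where it used to stand, which is the region a restoration step would then fill from neighboring frames.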
Rapid developments in 3D display technologies have enabled consumers to enjoy 3D environments in an increasingly immersive manner through display systems such as stereoscopic, multiview, and light field displays. However, the complexity of the conventional multiview rendering process grows correspondingly in the attempt to achieve a sufficient level of reality, which may hinder the commercial viability of 3D display products based on such an approach. This paper proposes a novel method, called direct light field rendering, which composes the 3D display panel image without reconstructing all the multiview images beforehand. Interpreting the 3D display as sampling in the light field domain, we directly compute only the necessary samples rather than the entire light field or the multiview images. The proposed algorithm reduces to solving linear systems of two variables and therefore has remarkably low computational complexity. Experimental results show that the computation time and memory usage are as little as 12% and 1%, respectively, of those required by the conventional method.

Keywords: autostereoscopic display, depth image based rendering, light field rendering.
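The "linear system of two variables" per sample can be illustrated as a ray–plane intersection: a display pixel determines a ray through the light field, and intersecting it with a locally planar depth layer gives the scene point to sample directly, with no intermediate view synthesis. This sketch uses an illustrative parameterization (ray `x = x_panel + view_slope * z`, plane `z = a*x + b`), not the paper's notation.

```python
import numpy as np

def direct_sample(x_panel, view_slope, plane_a, plane_b):
    """Solve directly for the scene point hit by one display ray.

    Ray:   x - view_slope * z = x_panel
    Plane: -plane_a * x + z  = plane_b
    The intersection is a 2x2 linear system in (x, z); solving one such
    tiny system per needed sample is what keeps the cost low.
    """
    A = np.array([[1.0, -view_slope],
                  [-plane_a, 1.0]])
    rhs = np.array([x_panel, plane_b])
    x, z = np.linalg.solve(A, rhs)
    return x, z
```

For example, a ray leaving panel position 1.0 with slope 0.5 toward a flat layer at depth 2 lands at x = 2.0, found in one solve rather than by rendering every intermediate view.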