We present an appearance-based virtual view generation method that allows viewers to fly through a real dynamic scene. The scene is captured by multiple synchronized cameras. Arbitrary views are generated by interpolating the two original camera views nearest the given viewpoint. The quality of the generated synthetic view is determined by the precision, consistency, and density of the correspondences between the two images. Most previous work based on interpolation extracts the correspondences from these two images alone. However, not only is it difficult to do so reliably (the task requires a good stereo algorithm), but the two images alone sometimes do not contain enough information, owing to problems such as occlusion. Instead, we take advantage of the fact that we have many views, from which we can extract much more reliable and comprehensive three-dimensional (3-D) geometry of the scene as a 3-D model. Dense and precise correspondences between the two images, to be used for interpolation, are obtained from this constructed 3-D model. Pseudo correspondences are obtained even for regions occluded in one of the cameras, and these are then used to correctly interpolate between the two images. Our method of 3-D modeling from multiple images uses the multiple-baseline stereo method and the shape-from-silhouette method. Virtual view sequences are presented to demonstrate the performance of virtual view generation in the 3-D Room.

Index Terms-Image-based rendering, model-based rendering, multibaseline stereo, multiple-view images, shape from silhouette, 3-D model.

I. INTRODUCTION

Methods for three-dimensional (3-D) shape reconstruction from multiple-view images have recently received significant research attention, mainly because of advances in computational power and data-handling capacity.
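The interpolation idea described in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: the function name `interpolate_view`, the flat correspondence lists, and the splat-and-average accumulation are all assumptions for demonstration. Given dense correspondences between two views (which the paper derives from a reconstructed 3-D model rather than from two-view stereo), both the pixel positions and the colors are blended with the same weight.

```python
import numpy as np

def interpolate_view(img1, img2, corr1, corr2, weight, shape):
    """Render a virtual view between two real views by interpolating
    dense correspondences. corr1/corr2 are matched (x, y) pixel lists;
    weight in [0, 1] moves the viewpoint from view 1 toward view 2."""
    out = np.zeros(shape + (3,), dtype=np.float64)
    count = np.zeros(shape, dtype=np.float64)
    for (x1, y1), (x2, y2) in zip(corr1, corr2):
        # Linearly interpolate corresponding pixel positions...
        xv = (1 - weight) * x1 + weight * x2
        yv = (1 - weight) * y1 + weight * y2
        # ...and blend the two colors with the same weight.
        c = (1 - weight) * img1[y1, x1] + weight * img2[y2, x2]
        xi, yi = int(round(xv)), int(round(yv))
        if 0 <= xi < shape[1] and 0 <= yi < shape[0]:
            out[yi, xi] += c       # splat into the virtual image
            count[yi, xi] += 1
    mask = count > 0
    out[mask] /= count[mask][:, None]  # average overlapping splats
    return out.astype(np.uint8)
```

Note that correspondences obtained this way need not cover every output pixel; a real renderer would also fill holes, which this sketch omits.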
Research in 3-D shape reconstruction from multiple-view images has conventionally been applied in robot vision and machine vision systems, in which the reconstructed 3-D shape is used for recognizing the real scene structure and object shape. For those kinds of applications, the 3-D shape itself is the goal of the reconstruction. New applications of 3-D shape reconstruction have recently been introduced [26], [29]. One such application is arbitrary view generation from multiple-view images, in which new views are generated by rendering pixel values of the input images.
In this paper, we present an "appearance-based" virtual view generation method for temporally varying events captured by the multiple cameras of the "3D Room" developed by our group. With this method, we can generate images from any virtual viewpoint between two selected real views. The virtual appearance view generation method is based on simple interpolation between the two selected views. The correspondences between the views are generated automatically from the multiple images within a volumetric model shape reconstruction framework. Because the correspondences are obtained from the recovered volumetric model, even regions occluded in one of the views can be correctly interpolated in the virtual view images. Virtual view image sequences are presented to demonstrate the performance of virtual view image generation in the 3D Room.

[Figure: The 3D Room - 9 cameras on the ceiling and 10 cameras on each wall (40 in total).]
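The volumetric reconstruction that supplies these correspondences relies in part on shape from silhouette (the visual hull). The idea can be sketched as follows; this is an illustrative assumption of the standard technique, not the paper's code, and the per-camera `project` callback and the flat voxel list are hypothetical simplifications. A voxel survives only if it projects inside the object silhouette in every camera.

```python
import numpy as np

def carve_voxels(silhouettes, project, grid_points):
    """Shape-from-silhouette (visual hull) sketch. silhouettes is a list
    of boolean masks, one per camera; project(cam, p) is an assumed
    callback returning the integer pixel (u, v) of 3-D point p in camera
    cam; grid_points is an (N, 3) array of candidate voxel centers."""
    keep = np.ones(len(grid_points), dtype=bool)
    for cam, sil in enumerate(silhouettes):
        h, w = sil.shape
        for i, p in enumerate(grid_points):
            if not keep[i]:
                continue  # already carved by an earlier camera
            u, v = project(cam, p)
            # Carve away voxels that fall outside this view's silhouette.
            if not (0 <= u < w and 0 <= v < h and sil[v, u]):
                keep[i] = False
    return grid_points[keep]
```

In the paper's framework this silhouette-based hull is combined with multiple-baseline stereo, which refines the volume with photometric depth estimates; the sketch above shows only the silhouette constraint.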