A full view spherical camera exploits its extended field of view (FOV) to map its complete environment onto a 2D image plane. With a single shot it therefore delivers far more information about the surroundings than a normal perspective or plenoptic camera, the types commonly used in light field imaging. However, in contrast to a light field camera, a spherical camera does not capture directional information about the incident light, so a single shot from a spherical camera is not sufficient to reconstruct 3D scene geometry.

In this paper, we introduce a method combining spherical imaging with the light field approach. To obtain 3D information with a spherical camera, we capture several independent spherical images with a constant vertical offset between the camera positions and combine the images into a Spherical Light Field (SLF).

Our approach differs from related work in terms of expanded FOV and reduced acquisition time. Taguchi et al. [2] used an array of spherical mirrors to model catadioptric cameras for wide angle light field rendering, which implies decreasing tangential resolution close to the mirror borders and limits the FOV to 150° × 150°. Unger et al. [4] employed a fisheye camera translated on a plane to capture hemispherical HDR images of a scene; the total acquisition time of up to 12 hours for a single scene restricts the application scenario to constantly illuminated indoor environments. Our proposed approach for SLF acquisition uses spherical cameras as shown in Figure 1(a) and allows scenes to be captured within a few minutes, making it applicable to outdoor scenes.

A convenient description of this camera type is provided by Torii et al. [3], who consider a spherical camera to consist of a camera center C with a surrounding unit sphere acting as projection surface. This definition implies that no intrinsic parameters such as focal length or distortion values known from perspective imaging need to be considered (Figure 1(b)). By applying the Mercator projection [1], the spherical image is conformally mapped to an image on a cylinder surface Π (Figure 1(c)), allowing for epipolar plane image (EPI) reconstruction.

To describe an SLF, we define a new parametrization for the camera domain and the surrounding spherical 2D mapped image. We take the cylinder surface Π and denote its center line by Ω. The cylinder surface Π is parametrized by the image coordinates (φ, θ) ∈ Π. The line Ω contains the focal points t ∈ Ω of all possible camera positions in the vertical direction. A Spherical Light Field can then be described by a function

L : Ω × Π → ℝ, (t, φ, θ) ↦ L(t, φ, θ),

where L(t, φ, θ) defines the intensity of the incident light ray on the image plane at (φ, θ) passing through the focal point t. To estimate the disparity, we address a 2D slice Σ_φ* of the SLF by setting φ to a fixed value φ*. The restriction of the light field to such a slice defines an EPI, formally given as

Σ_φ* : (t, θ) ↦ L(t, φ*, θ).

Assuming a Lambertian scene, the EPI yields information about the disparity of a scene point in the form of oriented lines. To...
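To make the parametrization above concrete, the following Python sketch stacks Mercator-projected spherical images captured at constant vertical offsets into an SLF volume L(t, θ, φ), extracts an EPI slice Σ_φ*, and estimates the local line orientation with a structure tensor. This is a minimal illustration, not the authors' implementation: the array layout, the nearest-neighbour resampling, the latitude bound, and the structure-tensor step are all assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def mercator_remap(equirect, out_h, lat_max=1.4):
    """Resample an equirectangular spherical image (rows = elevation in [-pi/2, pi/2],
    columns = azimuth phi) onto the cylinder surface Pi via the Mercator projection,
    i.e. rows become uniform in y = log(tan(pi/4 + lat/2)). Nearest-neighbour
    resampling for brevity; lat_max (radians) bounds the usable latitude range."""
    in_h = equirect.shape[0]
    y_max = np.log(np.tan(np.pi / 4 + lat_max / 2))
    y = np.linspace(-y_max, y_max, out_h)
    lat = 2 * np.arctan(np.exp(y)) - np.pi / 2            # inverse Mercator
    rows = np.round((lat + np.pi / 2) / np.pi * (in_h - 1)).astype(int)
    return equirect[rows, :]

def build_slf(spherical_images, out_h=512):
    """Stack cylinder-mapped images captured at constant vertical offsets t
    into an SLF volume indexed as L[t, theta, phi]."""
    return np.stack([mercator_remap(img, out_h) for img in spherical_images])

def epi(slf, phi_index):
    """Slice Sigma_phi*: fix the azimuth phi and keep all (t, theta) samples."""
    return slf[:, :, phi_index]

def epi_orientation(epi_slice, sigma=1.5):
    """Per-pixel orientation of the lines in an EPI from the 2D structure tensor;
    under the Lambertian assumption the line slope encodes the disparity."""
    g_t = sobel(epi_slice.astype(float), axis=0)          # derivative along t
    g_q = sobel(epi_slice.astype(float), axis=1)          # derivative along theta
    J_tt = gaussian_filter(g_t * g_t, sigma)
    J_qq = gaussian_filter(g_q * g_q, sigma)
    J_tq = gaussian_filter(g_t * g_q, sigma)
    # Angle of the dominant gradient direction; the EPI lines run perpendicular to it.
    return 0.5 * np.arctan2(2 * J_tq, J_tt - J_qq)
```

A per-pixel disparity estimate would then follow from the slope of the recovered line orientation together with the known constant vertical baseline between camera positions.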
Light-field imaging is a research field with applicability in a variety of imaging areas including 3D cinema, entertainment, robotics, and any task requiring range estimation. In contrast to binocular or multi-view stereo approaches, capturing light fields means densely observing a target scene through a window of viewing directions. A principal benefit of light-field imaging for range computation is that one can eliminate the error-prone and computationally expensive process of establishing correspondence. The nearly continuous space of observation allows highly accurate and dense depth maps to be computed without matching. Here, we discuss how to structure the imaging system for optimal ranging over a defined volume, what we term a bounded frustum. We detail the process of designing the light-field setup, including how practical issues such as camera footprint and component size influence the depth of field as well as the lateral and range resolution. Both synthetic and real captured scenes are used to analyze the depth precision resulting from a design, and to show how unavoidable inaccuracies such as camera position and focal length variation limit depth precision. Finally, we indicate which inaccuracies may be sufficiently well compensated through calibration and which must be eliminated at the outset.

Figure 1. Visualization of a cross-structure light field. Horizontal light-field EPIs are obtained by slicing horizontally through the image volume, while vertical light-field EPIs are obtained by slicing vertically through the image volume.
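As a small illustration of the slicing described in the Figure 1 caption, the sketch below extracts horizontal and vertical EPIs from an image volume. The function names and the (view, row, column) array layout are assumptions made for this example, not the paper's code.

```python
import numpy as np

def horizontal_epi(horizontal_views, row):
    """EPI from the horizontal arm of the cross: stack one image row across all
    horizontally displaced views. horizontal_views has shape (num_views, H, W)."""
    return horizontal_views[:, row, :]          # shape (num_views, W)

def vertical_epi(vertical_views, col):
    """EPI from the vertical arm of the cross: stack one image column across all
    vertically displaced views. vertical_views has shape (num_views, H, W)."""
    return vertical_views[:, :, col]            # shape (num_views, H)
```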