Abstract: A new method to correct the barrel distortion of an electronic endoscope image is presented. A correction model assuming circularly symmetric distortion is introduced, with the following model parameters: the center of distortion and the coefficients of polynomials representing the distortion correction in the radial direction. If the imaging system is distortion-free, straight lines in the object space should be imaged as straight lines. Based on this criterion, a distorted image of a standard pattern consisti…
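The correction model described in this abstract (circularly symmetric distortion, with a polynomial in the radius about an estimated center of distortion) can be sketched as follows. This is an illustrative Python function, not the paper's exact parameterization; the function name, argument names, and polynomial form are assumptions:

```python
import numpy as np

def correct_barrel(points, center, coeffs):
    """Map distorted image points to corrected points, assuming
    circularly symmetric (purely radial) distortion.

    points: (N, 2) array of distorted pixel coordinates
    center: (xc, yc), the assumed center of distortion
    coeffs: polynomial coefficients a_i such that
            r_corrected = sum_i a_i * r_distorted**i
    """
    c = np.asarray(center, dtype=float)
    p = np.asarray(points, dtype=float) - c
    r = np.hypot(p[:, 0], p[:, 1])                 # distorted radius
    r_new = sum(a * r**i for i, a in enumerate(coeffs))
    # rescale each point along its own radial direction; leave the center fixed
    safe_r = np.where(r > 0, r, 1.0)
    scale = np.where(r > 0, r_new / safe_r, 1.0)
    return p * scale[:, None] + c
```

With `coeffs = [0.0, 1.0]` the mapping is the identity; adding a small positive cubic term pushes points outward with radius, which is the direction needed to compensate barrel compression.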
“…Such models were for example applied for endoscope calibration, see [17,204,454,565]. Other calibration approaches using such models are [8,42,134,148,215,216,274,302,330,381,436,437,514,515,521,525,534].…”
This survey is mainly motivated by the increased availability and use of panoramic image acquisition devices in computer vision and its various applications. Different technologies and different computational models thereof exist, and algorithms and theoretical studies for geometric computer vision ("structure-from-motion") are often re-developed without highlighting common underlying principles. One goal of this survey is to give an overview of image acquisition methods used in computer vision and, especially, of the vast number of camera models that have been proposed and investigated over the years, where we try to point out similarities between different models. Results on epipolar and multi-view geometry for different camera models are reviewed, as well as various calibration and self-calibration approaches, with an emphasis on non-perspective cameras. We finally describe what we consider to be fundamental building blocks for geometric computer vision or structure-from-motion: epipolar geometry, pose and motion estimation, 3D scene modeling, and bundle adjustment. The main goal here is to highlight the main principles of these, which are independent of specific camera models.
“…The barrel distortion is corrected by the method proposed by Haneishi et al. [32]. The magnification at various distances from the borescope is calibrated in water. The borescope focus is fixed at 5 mm from the borescope tip.…”
Direct imaging is a technique commonly used to study particle, bubble, and droplet size distributions in dynamic systems. Such objects can be present at various distances from the imaging device when images are captured, so the location of an object must be known to determine its actual size. However, the location cannot be determined from a single image, and a single calibration scale defined at the focusing plane is normally used to determine all object sizes from images. When the focus is close to the imaging device, the magnification changes strongly with location, and the size distribution obtained with a single calibration scale thus deviates considerably from the actual size distribution. In this study, a statistical method is proposed to reconstruct the actual object size distribution from the experimental object size distribution obtained from images using a single calibration scale defined at the focusing plane. Experiments on particle size distribution determination in a settling system are performed to validate the accuracy of the proposed method, and its stability is analyzed theoretically for imaging devices with different depth of field (DOF), focusing location, and change in magnification with distance.
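The magnification effect underlying this problem can be illustrated with a minimal forward model: an object at an unknown distance z within the depth of field appears scaled by mag(z) / mag(z_focus) when a single calibration scale at the focusing plane is used. The function below is a hypothetical sketch of such a forward model, not the paper's statistical reconstruction method; `mag` and all names are assumptions:

```python
import numpy as np

def apparent_sizes(true_sizes, mag, z_lo, z_hi, z_focus, rng):
    """Hypothetical forward model: objects at random distances z within
    the depth of field [z_lo, z_hi] appear scaled by mag(z) / mag(z_focus)
    when a single calibration scale at the focusing plane z_focus is used.
    `mag` is an assumed magnification-vs-distance function.
    """
    z = rng.uniform(z_lo, z_hi, size=len(true_sizes))
    return np.asarray(true_sizes, dtype=float) * mag(z) / mag(z_focus)
```

A forward model of this kind could sit inside a statistical reconstruction loop, e.g. adjusting the parameters of a candidate true-size distribution until the simulated apparent-size distribution matches the measured one; this is in the spirit of, but not identical to, the method described in the abstract.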
“…One can then use least squares analysis to estimate the mapping parameters for performing image correction [23–28]. Since the mapping of object to image is in principle a purely radial function, the mapping parameters are the image coordinates of the optical axis (x'_c, y'_c) together with the a_i coefficients used to express the radial form of the mapping from the image plane (x', y') to the object plane (x, y):…”
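The least-squares estimation of the a_i coefficients mentioned in this excerpt can be sketched as follows. The function name, the default polynomial degree, and the omission of a constant term (so that the center of distortion maps to itself) are illustrative assumptions, not the cited work's exact formulation:

```python
import numpy as np

def fit_radial_coeffs(r_image, r_object, degree=3):
    """Least-squares fit of coefficients a_i in
        r_object ~= a_1 * r_image + a_2 * r_image**2 + ... (up to `degree`),
    i.e. a purely radial mapping from image plane to object plane.
    Omitting the constant term assumes the center of distortion
    maps to itself.
    """
    r_image = np.asarray(r_image, dtype=float)
    r_object = np.asarray(r_object, dtype=float)
    # design matrix with columns r, r**2, ..., r**degree
    A = np.vander(r_image, degree + 1, increasing=True)[:, 1:]
    coeffs, *_ = np.linalg.lstsq(A, r_object, rcond=None)
    return coeffs
```

In practice the radii would be measured from matched points on a calibration target, with r computed relative to the estimated optical-axis coordinates.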
Section: Lens Tolerancing, Manufacture and Testing
Abstract. We present a foveated miniature endoscopic lens implemented by amplifying the optical distortion of the lens. The resulting system provides a high-resolution region in the central field of view and low resolution in the outer fields, such that a standard imaging fiber bundle can provide both the high resolution needed to determine tissue health and the wide field of view needed to determine the location within the inspected organ. Our proof-of-concept device achieves 7–8 μm resolution in the fovea and an overall field of view of 4.6 mm. Example images and videos show the foveated lens's capabilities.