We present an approach for easily removing the effects of haze from passively acquired images. Our approach is based on the fact that the natural illuminating light scattered by atmospheric particles (airlight) is usually partially polarized. Optical filtering alone cannot remove the haze effects, except in restricted situations. Our method, however, stems from a physics-based analysis that works under a wide range of atmospheric and viewing conditions, even if the polarization is low. The approach does not rely on specific scattering models such as Rayleigh scattering, nor on knowledge of the illumination directions. It can be used with as few as two images taken through a polarizer at different orientations. As a byproduct, the method yields a range map of the scene, which enables scene rendering as if imaged from different viewpoints. It also yields information about the atmospheric particles. We present experimental results of complete dehazing of outdoor scenes, in far-from-ideal conditions for polarization filtering. We obtain a great improvement in scene contrast and color correction.
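The two-image recovery can be illustrated with a minimal sketch. This is not the paper's exact implementation: it assumes the airlight degree of polarization `p` and the airlight radiance at the horizon `a_inf` have already been estimated (e.g., from sky pixels), and all variable names are hypothetical.

```python
import numpy as np

def dehaze_polarization(i_min, i_max, p, a_inf):
    """Sketch of polarization-based dehazing from two polarizer images.

    i_min, i_max : float arrays, frames at the polarizer orientations
                   giving minimum/maximum airlight.
    p            : estimated degree of polarization of the airlight.
    a_inf        : estimated airlight radiance at the horizon.
    """
    i_total = i_min + i_max                   # total measured intensity
    airlight = (i_max - i_min) / p            # airlight map estimate
    transmittance = 1.0 - airlight / a_inf    # atmospheric transmittance
    transmittance = np.clip(transmittance, 1e-3, 1.0)
    radiance = (i_total - airlight) / transmittance  # dehazed radiance
    return radiance, transmittance
```

Clipping the transmittance guards against division blow-up in the haziest (most distant) regions, where the estimate is noisiest.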
Abstract-Underwater imaging is important for scientific research and technology as well as for popular activities, yet it is plagued by poor visibility conditions. In this paper, we present a computer vision approach that removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation and show that the main degradation effects can be associated with partial polarization of light. We then present an algorithm that inverts the image formation process to recover good visibility in images of scenes. The algorithm is based on a pair of images taken through a polarizer at different orientations. As a by-product, a distance map of the scene is also derived. In addition, this paper analyzes the noise sensitivity of the recovery. We successfully demonstrated our approach in experiments conducted in the sea. Great improvements of scene contrast and color correction were obtained, nearly doubling the underwater visibility range.

Index Terms-Color, illumination, image enhancement, inverse problems, polarized light, scattering, three-dimensional reconstruction, undersea vision, underwater imaging.

I. UNDERWATER VISION

Underwater vision is plagued by poor visibility conditions [1]-[6]. According to [7], most computer vision methods (e.g., those based on stereo triangulation or on structure from motion) cannot be employed directly underwater. This is due to the particularly challenging environmental conditions, which complicate image matching and analysis. It is important to alleviate these visibility problems, since underwater imaging is widely used in scientific research and technology, and computer vision methods are being used in this mode of imaging for various applications [5]. What makes underwater imaging so problematic? To understand the challenge, consider Fig. 1, which shows an underwater archaeological site about 2.5 m deep.
It is easy to see that visibility degradation effects vary as distances to objects increase [3], [28]. Since objects in the field of view (FOV) are at different distances from the camera, the causes of image degradation are spatially varying. This situation is analogous to open-air vision in bad weather (fog or haze), described in [29]-[34]. In contrast, traditional image enhancement tools, e.g., high-pass filtering and histogram equalization, are typically spatially invariant. Since they do not model the spatially varying distance dependencies, traditional methods are of limited utility in countering visibility problems, as has been demonstrated in past experiments [33], [35] as well as in this paper.

In this paper, we develop a physics-based approach for recovery of visibility when imaging underwater scenes in natural illumination. Since it is based on models of image formation, the approach automatically accounts for dependencies on object distance and estimates a distance map of the scene as a by-product. The approach is fast and relies on raw images taken through different states of a polarizing filter. These raw images have...
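The distance-map by-product follows from the exponential attenuation model t = exp(-beta * z). A minimal sketch, assuming the attenuation coefficient `beta` of the medium is known (a hypothetical calibration input, not part of the paper's text):

```python
import numpy as np

def range_map(transmittance, beta):
    # Invert t = exp(-beta * z) to recover distance: z = -ln(t) / beta.
    t = np.clip(transmittance, 1e-6, 1.0)  # avoid log(0) in opaque regions
    return -np.log(t) / beta
```

The result is a relative range map; its absolute scale depends on how well beta is calibrated for the water conditions.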
Abstract-Imaging in scattering media such as fog and water is important but challenging. Images suffer from poor visibility due to backscattering and signal attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method that uses active scene irradiance. We study the formation of images under widefield artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active widefield, polychromatic, polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are instantly taken, with different states of the analyzer or light-source polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range due to image noise and falloff of the widefield illumination; these limits and the noise sensitivity are therefore analyzed. The approach particularly applies underwater. We therefore use it to demonstrate recovery of object signals and significant visibility enhancement in underwater field experiments.
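Allowing both components to be partially polarized turns the recovery into a per-pixel 2x2 linear unmixing. A minimal sketch under assumed, separately calibrated degrees of polarization `p_b` (backscatter) and `p_s` (object signal); the names and the calibration step are hypothetical, not the paper's exact procedure:

```python
import numpy as np

def separate_components(i_max, i_min, p_b, p_s):
    """Unmix backscatter B and object signal S from two analyzer states,
    modeling i_max = B(1+p_b)/2 + S(1+p_s)/2 and
             i_min = B(1-p_b)/2 + S(1-p_s)/2."""
    mix = np.array([[(1 + p_b) / 2, (1 + p_s) / 2],
                    [(1 - p_b) / 2, (1 - p_s) / 2]])
    rhs = np.stack([i_max.ravel(), i_min.ravel()])  # shape (2, n_pixels)
    b, s = np.linalg.solve(mix, rhs)
    return b.reshape(i_max.shape), s.reshape(i_max.shape)
```

When p_b = p_s, the mixing matrix becomes singular, reflecting the physical fact that the two components are then indistinguishable by polarization alone.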
The accuracy of depth estimation based on defocus effects has been essentially limited by the depth of field of the imaging system. We show that depth estimation can be improved significantly relative to classical methods by exploiting three-dimensional diffraction effects. We formulate the problem using information-theoretic analysis and present, to the best of our knowledge, a new paradigm for depth estimation based on spatially rotating point-spread functions (PSFs). Such PSFs are fundamentally more sensitive to defocus thanks to their first-order axial variation. Our system acquires a frame by using a rotating PSF and jointly processes it with an image acquired by using a standard PSF to recover depth information. Analytical, numerical, and experimental evidence suggests that the approach is suitable for applications such as microscopy and machine vision. © 2006 Optical Society of America. OCIS codes: 110.6880, 110.4850, 100.6640, 150.5670.

The human visual system uses defocus as a depth cue [1]. Optical images convey three-dimensional (3D) information by the amount of blur in each image region: the further the object is from the in-focus plane, the more blurred it appears. This principle is exploited in techniques known as depth from defocus (DFD) by jointly processing frames acquired in different focus or aperture settings [1]-[7]. Relative to stereovision, DFD is more robust to occlusion and correspondence problems [8]. Moreover, in applications that require a large numerical aperture (NA), particularly in high-magnification microscopy, DFD is more suitable than stereovision. Previous DFD work has concentrated on the implementation of signal processing algorithms based on a geometrical optics model. Typical systems have utilized a clear, circular aperture, as is found in standard camera lenses [1]-[7]. However, the point-spread function (PSF) of such systems has not been optimized for depth estimation.
Therefore, in this Letter we engineer the PSF to achieve enhanced performance in this specific task. We exploit the freedom provided by diffractive optics to design unconventional optical responses. In particular, we investigate 3D PSFs whose transverse cross sections rotate with respect to each other as a result of diffraction in free space [9]-[14]. Rotating PSFs provide a faster rate of change with depth than the PSFs of clear-pupil systems having the same NA [9]. As a consequence, we show here that rotating PSFs present approximately an order of magnitude increase in Fisher information (FI) along the depth dimension when compared with standard pupils. Finally, we demonstrate this principle in an experiment based on a two-channel system that encodes a rotating PSF.

The more dissimilar the PSF is at different values of defocus, the easier it is to distinguish between depth planes in the presence of noise. Defocus is typically quantified by the defocus parameter ψ, defined as [15]

ψ = (πR²/λ)(1/z_obj^focus − 1/z′_obj),

where λ is the wavelength of light, and z_obj^focus and z′_obj are the in-focus and actual object distances from the entrance pupil, re...
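As a minimal numeric illustration of this quantity (using a common convention for the defocus parameter; the exact prefactor and the meaning of R as the pupil radius are assumptions here, since the Letter's definition is truncated above):

```python
import math

def defocus_parameter(wavelength, pupil_radius, z_focus, z_obj):
    # psi = (pi * R^2 / lambda) * (1/z_focus - 1/z_obj)
    # psi = 0 when the object is in focus; |psi| grows with defocus.
    return (math.pi * pupil_radius ** 2 / wavelength) * (1.0 / z_focus - 1.0 / z_obj)
```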
People and animals fuse auditory and visual information to obtain robust perception. A particular benefit of such cross-modal analysis is the ability to localize visual events associated with sound sources. We aim to achieve this using computer vision aided by a single microphone. Past efforts encountered problems stemming from the huge gap between the dimensionality involved and the available data, which led to solutions suffering from low spatio-temporal resolution. We present a rigorous analysis of the fundamental problems associated with this task. We then present a stable and robust algorithm that overcomes past deficiencies. It captures dynamic audio-visual events with high spatial resolution and derives a unique solution. The algorithm effectively detects pixels that are associated with the sound while filtering out other dynamic pixels. It is based on canonical correlation analysis (CCA), where we remove inherent ill-posedness by exploiting the typical spatial sparsity of audio-visual events. The algorithm is simple and efficient thanks to its reliance on linear programming, and it is free of user-defined parameters. To quantitatively assess performance, we devise a localization criterion. The algorithm's capabilities were demonstrated in experiments, where it overcame substantial visual distractions and audio noise.
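The CCA at the core of the method can be sketched in its plain (non-sparse) form. This baseline omits the paper's sparsity prior and linear-programming formulation, and is only an illustration of the underlying correlation measure:

```python
import numpy as np

def first_canonical_correlation(x, y):
    """First canonical correlation between data matrices x (n x p) and
    y (n x q), via QR whitening and an SVD of the cross-product."""
    x = x - x.mean(axis=0)          # center each feature
    y = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(x)         # orthonormal basis for x's column space
    qy, _ = np.linalg.qr(y)
    sing = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(sing[0], 1.0)        # clip numerical overshoot above 1
```

In the audio-visual setting, x would hold per-pixel temporal features and y audio features; the paper's contribution is making the visual projection sparse, so that only sound-associated pixels are selected.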