In daylight viewing conditions, image contrast is often significantly degraded by atmospheric aerosols such as haze and fog. This paper introduces a method for reducing this degradation in situations in which the scene geometry is known. Contrast is lost because light is scattered toward the sensor by the aerosol particles and because the light reflected by the terrain is attenuated by the aerosol. This degradation is approximately characterized by a simple, physically based model with three parameters. The method involves two steps: first, an inverse problem is solved to recover the three model parameters; then, for each pixel, the relative contributions of scattered and reflected flux are estimated. The estimated scatter contribution is subtracted from the pixel value, and the remainder is scaled to compensate for aerosol attenuation. This paper describes the image processing algorithm and presents an analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image. This analysis shows that the SNR decreases exponentially with range. A temporal filter structure is proposed to solve this problem. Results are presented for two image sequences taken from an airborne camera in hazy conditions and one sequence in clear conditions. A satisfactory agreement between the model and the experimental data is shown for the haze conditions. A significant improvement in image quality is demonstrated when using the contrast enhancement algorithm in conjunction with a temporal filter.
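The subtract-and-scale restoration step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the common atmospheric scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·d) for known range d, and uses only two of the three model parameters (an assumed airlight brightness A and extinction coefficient β); the function name `dehaze` and all parameter values are hypothetical.

```python
import numpy as np

def dehaze(image, depth, airlight=0.8, beta=0.05):
    """Subtract the estimated scatter contribution from each pixel and
    rescale the remainder to compensate for aerosol attenuation.

    image   : observed intensities, normalized to [0, 1]
    depth   : per-pixel range from camera to terrain (known geometry)
    airlight: assumed brightness of the scattered-light term
    beta    : assumed atmospheric extinction coefficient
    """
    t = np.exp(-beta * depth)            # transmission along each ray
    scatter = airlight * (1.0 - t)       # light scattered toward the sensor
    # Dividing by t amplifies noise at long range, which is why the SNR
    # of the enhanced image decreases exponentially with range.
    restored = (image - scatter) / np.maximum(t, 1e-3)
    return np.clip(restored, 0.0, 1.0)
```

The floor on the divisor limits noise amplification at extreme range, where the transmission term becomes vanishingly small.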
When an aircraft is flying in conditions of low-level mist or cloud, visibility of terrain features from the cockpit may be low. However, if image enhancement techniques are applied to a sequence of images (captured at 25 Hz with a cockpit-mounted camera), the effective visibility of terrain features can be increased. The main sources of image degradation are sensor noise and the scattering and attenuation of light by haze and fog. We present a two-stage algorithm that reduces such degradation through temporal processing. The first stage involves motion-compensated temporal averaging of a set of consecutive images. The frame-to-frame visual motion is calculated using navigational information, a model of the camera, and a database of terrain elevations. Since this motion is predicted independently of image content, it is unaffected by the degradation. The temporal averaging of a sequence of images produces an averaged image with a decreased level of sensor noise. The second stage reverses the loss of contrast caused by the atmosphere. The total light detected by the camera is the sum of the light reflected from the terrain and that scattered from the atmospheric particles, both of which are functions of the distance from the camera to the terrain. By considering the relationship between depth and brightness, a parametric model for the total light detected is proposed. The parameters of this model provide the means to subtract the component of light due to atmospheric scattering and then to scale the result to compensate for the attenuation of the light reflected from the terrain. The algorithm has been applied to forward-looking image sequences captured in both good and poor visibility conditions. The results show a considerable increase in effective visibility over the unprocessed images.
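The first stage, motion-compensated temporal averaging, can be sketched as below. This is an illustrative simplification only: it assumes the predicted frame-to-frame motion reduces to whole-pixel translations, whereas the abstract describes a full warp derived from navigational data, a camera model, and a terrain-elevation database. The function `temporal_average` and the translation-only motion model are assumptions, not the authors' implementation.

```python
import numpy as np

def temporal_average(frames, shifts):
    """Average a set of consecutive frames after compensating each one
    for its predicted motion, reducing sensor noise.

    frames : list of 2-D arrays (consecutive images)
    shifts : list of (dy, dx) whole-pixel translations that register
             each frame to a common reference frame
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        # np.roll wraps at the image borders; a real implementation
        # would warp using the terrain database and mask invalid pixels.
        acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)
```

Because the motion is predicted from navigation data rather than estimated from image content, the registration step is unaffected by haze, fog, or sensor noise in the frames themselves.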