Image dehazing addresses the undesired loss of visibility in outdoor images caused by the presence of fog. Retinex is a color vision model that mimics the ability of the Human Visual System to robustly discount varying illuminations when observing a scene under different spectral lighting conditions, and it has been widely explored in the computer vision literature for image enhancement and related tasks. While these two problems appear unrelated, the goal of this work is to show that they can be connected by a simple linear relationship. Specifically, most Retinex-based algorithms share the characteristic of always increasing image brightness, which makes them ideal candidates for effective image dehazing: Retinex is applied directly to a hazy image whose intensities have been inverted. In this paper, we provide a theoretical proof that Retinex on inverted intensities is a solution to the image dehazing problem. Comprehensive qualitative and quantitative results indicate that several classical and modern implementations of Retinex can be transformed into competitive image dehazing algorithms that perform on par with more complex fog removal methods and can overcome some of the main challenges associated with this problem.
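The recipe described in the abstract above, invert the hazy image, run a Retinex step, and invert the result back, can be sketched minimally. The single-scale Retinex with a box-filter surround used below is an illustrative simplification, not the authors' implementation; the function names, kernel size, and normalization are assumptions for the sketch.

```python
import numpy as np

def box_blur(img, k=15):
    # Crude separable box filter used here as the Retinex "surround" estimate
    # (a Gaussian surround is more common; the box filter keeps this self-contained).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 0, padded)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, blurred)
    return blurred

def single_scale_retinex(img, k=15):
    # Single-scale Retinex: log of the image minus log of its smoothed surround,
    # rescaled to [0, 1]. Retinex output tends to be brighter than its input.
    eps = 1e-6
    log_ratio = np.log(img + eps) - np.log(box_blur(img, k) + eps)
    lo, hi = log_ratio.min(), log_ratio.max()
    return (log_ratio - lo) / (hi - lo + eps)

def dehaze_via_inverted_retinex(hazy, k=15):
    # The linear connection from the paper: invert intensities (haze becomes
    # darkness), apply Retinex (which brightens), then invert back.
    return 1.0 - single_scale_retinex(1.0 - hazy, k)
```

The inversion trick works because haze raises intensities toward white; after inversion the degradation looks like a dark veil, which brightness-increasing Retinex naturally removes.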
Abstract. Images obtained under adverse weather conditions, such as haze or fog, typically exhibit low contrast and faded colors, which may severely limit the visibility within the scene. Unveiling the image structure under the haze layer and recovering vivid colors from a single image remains a challenging task, since the degradation is depth-dependent and conventional enhancement methods are unable to overcome it. In this work, we extend a well-known perception-inspired variational framework to single image dehazing. Two main improvements are proposed. First, we replace the grey-world value used by the framework with an estimate of the mean of the clean image. Second, we add a set of new terms to the energy functional that maximize the inter-channel contrast. Experimental results show that the proposed Enhanced Variational Image Dehazing (EVID) method outperforms other state-of-the-art methods both qualitatively and quantitatively. In particular, when the illuminant is uneven, our EVID method is the only one that recovers realistic colors, avoiding the appearance of strong chromatic artifacts.
Gamut mapping transforms the colors of an input image to the colors of a target device so as to exploit the full potential of the rendering device in terms of color rendition. In this paper we present spatial gamut mapping algorithms that rely on a perceptually-based variational framework. Our algorithms adapt a well-known image energy functional whose minimization leads to image enhancement and contrast modification. We show that, by varying the weight of the contrast term in the functional, we are able to perform both gamut reduction and gamut extension. We propose an iterative scheme that allows our algorithms to successfully map the colors from the gamut of the original image to a given destination gamut while keeping the perceived colors close to those of the original image. Both subjective and objective evaluations validate the promising results achieved by our proposed algorithms.
Abstract. We propose a novel image dehazing technique based on the minimization of two energy functionals and a fusion scheme that combines the output of both optimizations. The proposed Fusion-based Variational Image Dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation maximizing contrast and saturation on the hazy input. The iterates produced by this minimization are kept, and a second energy functional, which shrinks the intensity values of well-contrasted regions more quickly, is then minimized; observing the shrinking rate allows the generation of a set of Difference-of-Saturations (DiffSat) maps. The iterates produced in the first minimization are then fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method neither relies on a physical model from which to estimate a depth map, nor does it need a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves the image structure in nearby regions that are less affected by fog, and it compares favorably with other current methods in the task of removing haze degradation from far-away regions.