Underwater images suffer from low visibility and contrast caused by absorption and scattering, which lead to haze and further degradation. Existing underwater single-image dehazing methods struggle to balance performance against computational complexity, and often fail to produce satisfactory results in regions far from the camera. To overcome these problems, we propose a new underwater single-image dehazing method that includes an improved background light estimation based on a quad-tree subdivision iteration algorithm, and a novel transmission estimation method. For the background light estimation, we introduce a robust score that evaluates each region of the image in terms of both smoothness and color. For the transmission estimation, we propose the color space dimensionality reduction prior (CSDRP), which converts an image from the three-dimensional RGB color space to a two-dimensional color space, namely the UV color space. In the UV color space, by clustering the pixels into numerous haze-lines and carefully setting the haze-free boundary, the transmission map can be estimated and used to produce a well-dehazed image. Experimental results show that our method is competitive with mainstream underwater single-image dehazing methods. INDEX TERMS Underwater image dehazing, contrast enhancement, image enhancement, scattering removal.
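The quad-tree background light estimation described above can be sketched as a loop that repeatedly subdivides the current region into four quadrants and keeps the best-scoring one. The particular score below (rewarding smooth and bright-bluish regions) and the stopping size are illustrative assumptions; the abstract only states that the score combines smoothness and color.

```python
import numpy as np

def estimate_background_light(img, min_size=16, color_weight=0.5):
    """Quad-tree background light estimation (hedged sketch).

    img: H x W x 3 float array in [0, 1]. The exact scoring formula is
    an assumption for illustration, not the paper's robust score.
    """
    region = img
    while min(region.shape[:2]) > min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:h, :w], region[:h, w:],
                 region[h:, :w], region[h:, w:]]
        def score(q):
            smoothness = -q.std(axis=(0, 1)).mean()    # flat regions score high
            color = q.mean(axis=(0, 1))[2] - q.mean()  # bright-bluish bias
            return smoothness + color_weight * color
        region = max(quads, key=score)
    # average the final small region to obtain the background light (R, G, B)
    return region.reshape(-1, 3).mean(axis=0)
```

In this sketch, the subdivision terminates once the surviving region is smaller than `min_size` pixels on a side, and the background light is the mean color of that region.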
Absorption and scattering in aqueous media attenuate light and make imaging difficult. Therefore, an artificial light source is usually used to assist imaging in the deep ocean. However, an artificial light source typically alters the lighting conditions to a large extent, resulting in non-uniform illumination of the images. To solve this problem, we propose a non-uniform illumination correction algorithm for underwater images based on a fully convolutional network. The proposed algorithm models the original image as the sum of the ideal image and a non-uniform light layer. We replace the traditional pooling layers with dilated convolutions to expand the receptive field and achieve higher accuracy in non-uniform illumination recognition. To improve the perception ability of the network, the original image and features from a network pre-trained on ImageNet are concatenated, and the concatenated information is used as the input to the network. To address the color shift and blurred details of underwater images, we design a novel loss function with three parts: a feature loss, a smoothness loss, and an adversarial loss. Moreover, we build a dataset of underwater images with non-uniform illumination. Experiments show that our method outperforms several traditional methods in both subjective and objective assessment.
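The additive model above (original image = ideal image + non-uniform light layer) can be illustrated with a minimal sketch. Here a separable mean filter stands in for the fully convolutional network's prediction of the light layer; this stand-in is purely an assumption for illustration and not the paper's learned model.

```python
import numpy as np

def box_blur(img, k=31):
    # separable mean filter: a crude low-pass stand-in for the FCN's
    # light-layer prediction (illustrative assumption, not the paper's model)
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda a: np.convolve(a, kern, "valid"), 0, padded)
    return np.apply_along_axis(lambda a: np.convolve(a, kern, "valid"), 1, tmp)

def correct_illumination(img):
    """Correct non-uniform lighting under the additive model I = J + L."""
    light = box_blur(img)                  # estimated non-uniform light layer L
    light -= light.mean()                  # keep only the spatial variation
    return np.clip(img - light, 0.0, 1.0)  # recover the ideal image J = I - L
```

Subtracting the zero-mean light layer flattens smooth illumination gradients while preserving the image's average brightness, which is the essence of the additive decomposition.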
Underwater images captured by optical cameras are degraded by light attenuation and scattering, which deteriorate visual image quality. Underwater image enhancement plays an important role in a wide range of subsequent applications such as image segmentation and object detection. To address this issue, we propose an underwater image enhancement framework that consists of an adaptive color restoration module and a haze-line-based dehazing module. First, we employ an adaptive color restoration method to compensate for the deteriorated color channels and restore the colors. The color restoration module consists of three steps: background light estimation, color recognition, and color compensation. The background light estimation determines whether the image is bluish or greenish, and compensation is applied to the red-green or red-blue channels accordingly. Second, the haze-line technique is employed to remove haze and enhance image details. Experimental results show that the proposed method restores color and removes haze simultaneously, and that it outperforms several state-of-the-art methods on three publicly available datasets. Moreover, experiments on an underwater object detection dataset show that the proposed enhancement method improves the accuracy of a subsequent underwater object detection framework.
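The adaptive color compensation step above can be sketched as follows: the dominant background channel (blue or green) determines which channels are compensated. The specific compensation formula below (transferring a fraction of the mean difference, modulated by the dominant channel) is an illustrative assumption, not the authors' exact rule.

```python
import numpy as np

def adaptive_color_compensation(img, alpha=1.0):
    """Hedged sketch of adaptive color compensation for underwater images.

    img: H x W x 3 float array in [0, 1], channel order (R, G, B).
    """
    means = img.mean(axis=(0, 1))                # per-channel means (R, G, B)
    dominant = 2 if means[2] >= means[1] else 1  # bluish -> blue, else green
    weak = [0, 1] if dominant == 2 else [0, 2]   # red-green or red-blue channels
    out = img.copy()
    for c in weak:
        # lift the weak channel by a fraction of the mean gap, weighted by
        # the dominant channel's intensity (assumed form, for illustration)
        out[..., c] = img[..., c] + alpha * (means[dominant] - means[c]) \
                      * (1.0 - img[..., c]) * img[..., dominant]
    return np.clip(out, 0.0, 1.0)
```

For a bluish image this lifts the red and green channels toward the blue channel's mean while leaving blue untouched, matching the red-green compensation branch described above.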
Imaging through a wavy air–water surface suffers from severe geometric distortions caused by light refraction, which hinders the normal operation of underwater exploration equipment such as autonomous underwater vehicles (AUVs). In this paper, we propose a deep learning-based framework, namely the self-attention generative adversarial network (SAGAN), to remove geometric distortions and restore distorted images captured through the water–air surface. First, a K-means-based image pre-selection method is employed to acquire, from an image sequence, a less distorted image that preserves much of the useful information. Second, an improved generative adversarial network (GAN) is trained to translate the distorted image into a non-distorted one. During this process, an attention mechanism and a weighted training objective are adopted in our GAN framework to obtain high-quality restorations of distorted underwater images. The network is able to restore colors and fine details in the distorted images by combining three objective losses, i.e., the content loss, the adversarial loss, and the perceptual loss. Experimental results show that our proposed method outperforms other state-of-the-art methods on the validation set and on our sea trial set.
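The K-means-based pre-selection step above can be sketched with a tiny numpy k-means: flattened frames are clustered, and the frame closest to the centroid of the largest cluster is taken as the "less distorted" representative. The raw-pixel feature, the deterministic initialization, and the largest-cluster selection rule are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def preselect_frame(frames, k=2, iters=20):
    """Return the index of a representative, less distorted frame."""
    X = np.stack([np.asarray(f, dtype=float).ravel() for f in frames])
    centers = X[:k].copy()                        # deterministic init (assumed)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)                 # assign frames to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    big = np.bincount(labels, minlength=k).argmax()          # largest cluster
    members = np.flatnonzero(labels == big)
    best = members[((X[members] - centers[big]) ** 2).sum(axis=-1).argmin()]
    return int(best)  # frame closest to the largest cluster's centroid
```

The intuition is that mildly distorted frames dominate the sequence and form the largest cluster, so its centroid-nearest member is a reasonable, information-preserving starting image for the GAN-based restoration.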