Abstract: In recent decades, haze has become an environmental issue due to its effects on human health. It also reduces visibility and degrades the performance of computer vision algorithms in autonomous driving applications, which may jeopardize driving safety. It is therefore important to remove the haze effect from an image in real time. The purpose of this study is to leverage useful modules to achieve a lightweight, real-time image-dehazing model. Based on the U-Net architecture, this study integrates …
“…While existing nighttime dehazing algorithms, such as those proposed by Li and Chen et al. [27,28], have made progress in addressing the challenges of nighttime dehazing, they still face difficulties in accurately estimating the transmittance of the light source region. This can cause an increase in the pixel value of the light source region after dehazing, subsequently resulting in the diffusion of the light source. To overcome this issue, we propose an adaptive light source matrix mechanism based on the work of Yao et al. [30], as presented in Eq. [3] below.…”
Section: Transmittance Enhancement
confidence: 98%
“…In Eq. [3], C_x represents the photometric value of each pixel. |I_x − A_x|_max is used as the threshold value to distinguish between the light-source region and the non-light-source region. In addition, d(x, m) represents the distance from other pixels to the selected light source, and k_x is the impact matrix of each light source.…”
Section: Transmittance Enhancement
confidence: 99%
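The terms quoted above (a threshold on |I_x − A_x|, a per-source distance d(x, m), and an impact matrix k_x) can be illustrated with a small sketch. This is not Eq. [3] itself: the function name, the threshold ratio, and the Gaussian decay form are assumptions for illustration only.

```python
import numpy as np

def light_source_impact(I, A, ratio=0.8, sigma=15.0):
    """Illustrative per-pixel light-source impact matrix.

    I : (H, W) luminance image in [0, 1]
    A : (H, W) estimated ambient illumination
    Pixels where |I - A| exceeds a fraction of |I_x - A_x|_max are treated
    as light-source pixels; the decay exp(-d^2 / sigma^2) with distance
    d(x, m) is an assumed form standing in for the paper's k_x.
    """
    diff = np.abs(I - A)
    thresh = ratio * diff.max()           # fraction of |I_x - A_x|_max
    sources = np.argwhere(diff > thresh)  # candidate light-source pixels
    H, W = I.shape
    ys, xs = np.mgrid[0:H, 0:W]
    k = np.zeros((H, W))
    for (sy, sx) in sources:
        d2 = (ys - sy) ** 2 + (xs - sx) ** 2          # d(x, m)^2 to source m
        k = np.maximum(k, np.exp(-d2 / sigma ** 2))   # keep strongest influence
    return k
```

The impact value is 1 at a source pixel and falls off smoothly with distance, matching the snippet's description that a light source's influence on other pixels decreases with distance.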
“…Haze is caused by tiny particles in the air, which make images unclear. Clear and sharp images are necessary for computer vision tasks like object detection [1][2][3][4], surveillance, and self-driving cars [5][6]. Therefore, various image dehazing techniques have gained popularity. These techniques are essential to improve image acquisition systems and clear up images captured during hazy weather.…”
Due to various sources of interference in nighttime haze scenes, the resulting dehazed images are generally dim and dull, with diffuse illumination sources and a poor signal-to-noise ratio compared with daytime haze scenes. In this paper, we propose a new method for nighttime dehazing using dark channel prior (DCP) enhancement. First, to mitigate the effects of highlight sources and pseudo-light-source regions on ambient illumination estimation, a hybrid processing method that combines side window box filtering and fast edge-preserving filtering is introduced to pre-process nighttime haze images. The ambient illumination estimate of the image is derived based on DCP theory. Second, the adaptive light source matrix mechanism is used to fuse and enhance the transmittance of light-source and non-light-source regions to further improve the initial transmittance of the image. After processing, the transmittance map is compensated by gamma correction for the light source. Finally, we substitute the results into the atmospheric scattering model to obtain a better dehazed image. The nighttime dehazing method was compared with other image dehazing methods, and significant improvements were found in both subjective and objective evaluations. The obtained images are more consistent with the visual characteristics of the human eye, and evaluation indexes such as PSNR, SSIM and NIQE are improved.
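Two steps of the pipeline described above can be sketched compactly: the dark channel prior (a per-pixel channel minimum followed by a local minimum filter) and the gamma compensation of the transmittance map near light sources. This is a minimal sketch; the paper's actual pipeline adds side window box filtering and fast edge-preserving filtering, and the patch size and gamma value below are assumptions.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Minimal dark channel: per-pixel min over the colour channels,
    then a local minimum over a patch x patch window.

    img : (H, W, 3) image in [0, 1]
    """
    dc = img.min(axis=2)              # min over colour channels
    H, W = dc.shape
    pad = patch // 2
    padded = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    for y in range(H):
        for x in range(W):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def gamma_correct_transmission(t, gamma=0.8):
    """Gamma compensation of a transmittance map in (0, 1]
    (the gamma value is an illustrative assumption)."""
    return np.clip(t, 1e-6, 1.0) ** gamma
```

With gamma < 1, the correction raises low transmittance values, which counteracts the over-darkening that the snippet attributes to light-source regions after dehazing.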
“…Other works, such as [18,19,20,21,22,23,24,25], implemented what are known as Inception-like blocks, which use deep convolutional networks and the residual connections introduced in [26] to extract the outputs from each layer and concatenate them at the output, as shown in the example in Figure 4, mimicking the Inception layer depth-wise by allowing features extracted at multiple receptive fields to be processed at the output layer. However, while this approach can be suitable for certain applications such as classification, a loss of spatial features accumulates as we move deeper, diminishing the spatial accuracy of the larger LRF values, as illustrated in Figure 5: a bias towards features at the centre increases, impairing the layer's ability to accurately position where a feature is located and decreasing its efficiency in applications such as object detection.…”
Section: Width-based Layer Design (Inception and Inception-like Appro…
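The width-wise design described in the snippet above — filtering the same input at several receptive-field sizes and concatenating the responses — can be sketched in one dimension. Fixed box filters stand in for learned convolution weights here; the function name and kernel sizes are illustrative assumptions, not the cited architectures.

```python
import numpy as np

def inception_like_1d(x, kernel_sizes=(1, 3, 5)):
    """Toy Inception-like block: convolve the same 1-D input at several
    receptive-field widths and stack the branch outputs, so features at
    multiple receptive fields reach the output layer together."""
    branches = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                            # box filter of width k
        branches.append(np.convolve(x, kernel, mode='same'))
    return np.stack(branches)                              # (n_branches, len(x))
```

The width-1 branch passes the input through unchanged, while wider branches see progressively larger neighbourhoods — the same trade-off between receptive-field size and spatial precision that the snippet discusses.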
As the pixel resolution of imaging equipment has grown, image sizes and the number of pixels used to represent objects in images have increased accordingly, exposing an issue when dealing with larger images using traditional deep learning models and methods: they typically employ mechanisms such as increasing model depth, which, while suitable for applications that must be spatially invariant, such as image classification, causes issues for applications that rely on the location of features within the image, such as object localization and change detection. This paper proposes an adaptive convolutional kernels layer (AKL), an architecture that adjusts dynamically to image size in order to extract comparable spectral information from images of different sizes, improving the features' spatial resolution without sacrificing the local receptive field (LRF) for various image applications, specifically those that are sensitive to object and feature locations, using the definition of the Fourier transform and the relation between spectral analysis and convolution kernels. The proposed method is then tested using a Monte Carlo simulation to evaluate its spectral information coverage across images of various sizes, validating its ability to maintain coverage of a ratio of the spectral domain within a variation of around 20% of the desired coverage ratio. Finally, the AKL is validated on various image applications against other architectures such as Inception and VGG, demonstrating its capability to match Inception v4 in image classification and to outperform it as images grow larger, with up to a 30% increase in accuracy in object localization for the same number of parameters.
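The core idea of adapting the kernel to image size can be illustrated with a toy sizing rule: scale the kernel width with the image side so the kernel spans roughly a constant fraction of the spatial (and hence spectral) domain. The function name, the reference sizes, and the linear scaling are assumptions for illustration; the paper derives its rule from the Fourier transform, which this sketch does not reproduce.

```python
def adaptive_kernel_size(image_size, base_size=256, base_kernel=3):
    """Toy adaptive kernel sizing: keep kernel width proportional to image
    side length so its relative spatial extent (and thus the fraction of
    the spectral domain it covers) stays roughly constant across sizes."""
    k = max(1, round(base_kernel * image_size / base_size))
    return k if k % 2 == 1 else k + 1   # keep kernel width odd for a centre pixel
```

A fixed 3x3 kernel on a 512-pixel image covers only half the relative extent it covers on a 256-pixel image; scaling the kernel restores comparable coverage, which is the behaviour the abstract's Monte Carlo evaluation measures.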
Images routinely suffer from quality degradation in fog, mist, and other harsh weather conditions. Consequently, image dehazing is an essential and inevitable pre-processing step in computer vision tasks. Image quality enhancement for special scenes, especially nighttime image dehazing, is of great importance for unmanned driving and nighttime surveillance, yet the vast majority of past dehazing algorithms were only applicable to daytime conditions. After observing a large number of nighttime images, we find that artificial light sources take the place of the sun in daytime images and that the impact of a light source on pixels varies with distance. This paper proposes a novel nighttime dehazing method using a light source influence matrix. The luminosity map expresses the photometric difference of the image's light sources well. The light source influence matrix is then calculated to divide the image into near-light-source and non-near-light-source regions. Using these two regions, the two initial transmittances obtained by the dark channel prior are fused by edge-preserving filtering. For the atmospheric light term, the initial atmospheric light value is corrected by the light source influence matrix. Finally, the result is obtained by substituting into the atmospheric scattering model. Theoretical analysis and comparative experiments verify the performance of the proposed image dehazing method. In terms of PSNR, SSIM, and UQI, this method improves by 9.4%, 11.2%, and 3.3%, respectively, over the existing nighttime defogging method OSPF. In the future, we will extend this work from static image dehazing to real-time video stream dehazing and apply it to detection in potential applications.
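The fusion step described above — combining two initial transmittance estimates according to whether a pixel lies near a light source — can be sketched as a per-pixel weighted blend driven by a light source influence matrix in [0, 1]. This linear blend is an illustrative stand-in for the paper's edge-preserving fusion, and all names below are assumptions.

```python
import numpy as np

def fuse_transmittance(t_near, t_far, influence):
    """Blend two transmittance estimates with a light source influence
    matrix as the per-pixel weight: the near-light-source estimate
    dominates where the influence is high, the non-near-source estimate
    elsewhere. A linear blend stands in for edge-preserving fusion."""
    w = np.clip(influence, 0.0, 1.0)
    return w * t_near + (1.0 - w) * t_far
```

At influence 0 or 1 the blend reduces to one of the two region-specific estimates, so the fused map is continuous across the region boundary instead of switching abruptly.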
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.