Single-image dehazing is an important problem because it is a key prerequisite for most high-level computer vision tasks. Traditional prior-based methods adopt priors derived from clear images to constrain the atmospheric scattering model and then recover haze-free images. However, these prior-based methods often suffer from over-enhancement, such as halos and colour distortion. To address this problem, many works use a convolutional neural network to recover the original images. However, without priors as guidance, these learning-based methods dehaze effectively on synthetic datasets but perform poorly in real scenes. Hence, in this paper, we propose a prior-guided multiscale network for single-image dehazing named PGMNet. Specifically, prior-based methods are first applied to obtain dehazed versions of the training images, which are then fed into a parameter-shared encoder to form multiscale features. During decoding, these multiscale features guide the network to recover more image details. Moreover, since the prior-dehazed images usually contain over-enhanced regions, a spatial-attention-guided feature aggregation module and a squeeze-and-excitation module are adopted to alleviate colour distortion. The proposed PGMNet inherits the strength of prior-based methods in real-world haze removal and outperforms state-of-the-art methods on both synthetic and real-world datasets.
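The abstract names two attention mechanisms but gives no implementation details. As a rough illustration only, the PyTorch sketch below shows a standard squeeze-and-excitation block and one plausible form of a spatial attention gate over guidance features; the class names, channel handling, kernel size, and reduction ratio are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average pool -> (N, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # excitation: per-channel gate in [0, 1]
        )

    def forward(self, x):
        return x * self.fc(x)                              # damp channels carrying distorted colour

class SpatialAttentionGate(nn.Module):
    """Hypothetical gate: weight each location of the guidance features
    before aggregating them into the decoder stream."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, decoder_feat, guidance_feat):
        mask = torch.sigmoid(self.conv(guidance_feat))     # (N, 1, H, W), down-weights over-enhanced regions
        return decoder_feat + mask * guidance_feat
```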
Single-image dehazing has become a key prerequisite for most high-level computer vision tasks since haze severely degrades input images. Traditional prior-based methods dehaze images using assumptions drawn from haze-free images; they recover high-quality results but often introduce halos or color distortion. Recently, many methods have used convolutional neural networks to learn haze-relevant features and then recover the original images. These learning-based methods achieve better performance on synthetic scenes but can hardly restore a clear image with discriminative texture when applied to real-world images, mainly because the networks are trained on synthetic datasets. To address these problems, a self-modulated generative adversarial network for single-image dehazing named SMGAN is proposed. SMGAN feeds prior-dehazed images into a parameter-shared encoder to produce latent information about these dehazed images. During hazy-image decoding, this latent information is sent to self-modulated batch normalization layers, which adapts the network to real-world haze removal. Moreover, considering that the guidance images contain some over-enhanced regions, a refinement module is proposed to suppress this negative information. The proposed SMGAN combines the advantages of prior-based and learning-based methods, providing superior performance compared with state-of-the-art methods on both synthetic and real-world datasets.
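The abstract does not specify how the latent information enters the batch-normalization layers. A minimal sketch of one common self-modulation scheme (a latent code predicting per-channel scale and shift, as in self-modulated GANs) is given below, assuming the encoder output is pooled to a flat vector z; the residual formulation and all names are illustrative, not SMGAN's actual design.

```python
import torch
import torch.nn as nn

class SelfModulatedBN(nn.Module):
    """BatchNorm whose affine parameters are predicted from a latent
    vector (here, pooled encoder features of the prior-dehazed image)."""
    def __init__(self, num_features, latent_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)  # normalize only, no learned affine
        self.gamma = nn.Linear(latent_dim, num_features)      # latent -> per-channel scale
        self.beta = nn.Linear(latent_dim, num_features)       # latent -> per-channel shift

    def forward(self, x, z):
        # x: (N, C, H, W) decoder features; z: (N, latent_dim) latent code
        h = self.bn(x)
        g = self.gamma(z).unsqueeze(-1).unsqueeze(-1)         # (N, C, 1, 1)
        b = self.beta(z).unsqueeze(-1).unsqueeze(-1)
        return (1 + g) * h + b  # residual form biases the modulation toward identity
```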
Motivation: Image dehazing, as a key prerequisite of high-level computer vision tasks, has gained extensive attention in recent years. Traditional model-based methods recover dehazed images via the atmospheric scattering model; they dehaze favorably but often cause artifacts due to errors in parameter estimation. By contrast, recent model-free methods directly restore dehazed images with an end-to-end network, which achieves better color fidelity. To improve the dehazing effect, we combine the complementary merits of these two categories and propose a physical-model-guided self-distillation network for single image dehazing named PMGSDN.
Proposed method: First, we propose a novel attention-guided feature extraction block (AGFEB) and use it to build a deep feature extraction network. Second, we add three early-exit branches and embed dark channel prior information into the network to merge the merits of model-based and model-free methods; we then adopt self-distillation to transfer features from the deeper layers (acting as teacher) to the shallow early-exit branches (acting as students) to improve the dehazing effect.
Results: On the I-HAZE and O-HAZE datasets, the proposed method outperforms the other methods, achieving the best PSNR/SSIM values of 17.41 dB/0.813 and 18.48 dB/0.802, respectively. Moreover, for real-world images, the proposed method also obtains high-quality dehazed results.
Conclusion: Experimental results on both synthetic and real-world images demonstrate that the proposed PMGSDN can effectively dehaze images, producing results with clear textures and good color fidelity.
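How PMGSDN embeds the dark channel prior is not described in the abstract, but the prior itself is the well-known statistic of He et al.: the per-pixel minimum over the three color channels, followed by a minimum filter over a local patch. A short NumPy/SciPy sketch follows; the patch size of 15 is a conventional default, not a value taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel prior: J_dark(x) = min over patch of (min over RGB).
    image: H x W x 3 float array with values in [0, 1]."""
    per_pixel_min = image.min(axis=2)                      # min over the three color channels
    return minimum_filter(per_pixel_min, size=patch_size)  # min over a local patch

# For a haze-free outdoor image, dark_channel(img) is close to zero almost
# everywhere; haze raises it, which is what makes it a useful dehazing cue.
```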