“…and DSLR [48] did not effectively enhance the darker regions, such as regions that depict faces. NPE [53], Retinex-Net [12], LLNet [17], TBEFN [47], and KinD [50] produce artefacts, speckles, and colour shifts to different degrees, and even amplify noise.…”
Section: Qualitative Comparisons
confidence: 86%
“…According to the learning strategy employed, the deep learning methods used for image enhancement can be divided into supervised learning, reinforcement learning [39], unsupervised learning [40], zero-shot learning [41-43], and semi-supervised learning [44]. Supervised learning can be further divided into end-to-end methods [17, 45-48], deep Retinex-based methods [12, 18, 49, 50], and data-driven methods [13-16]. We found that supervised learning is the mainstream deep learning approach for low-light image enhancement, because paired training data and various low-/normal-light image synthesis methods are publicly available.…”
Section: Low-light Image Enhancement
confidence: 99%
“…We compare CSAN with 12 state-of-the-art methods, including the conventional methods NPE [53] and LIME [54], and the CNN-based methods Retinex-Net [12], LLNet [17], EnlightenGAN [40], ExCNet [41], Zero-DCE [43], DRBN [44], MBLLEN [45], TBEFN [47], DSLR [48], and KinD [50]. The results were reproduced using publicly available source code with the recommended parameters.…”
Low-light enhancement is a crucial task in computer vision because of the limited dynamic range of digital imaging devices in poor lighting conditions. Images taken under low-light conditions often suffer from insufficient brightness and severe noise. Many models based on convolutional neural networks have been proposed to enhance low-light images, but most treat the features on different channels equally, which hinders the learning of hierarchical features. Consequently, this paper proposes a channel splitting attention network (CSAN) that divides the shallow features into two branches, a residual branch and a dense branch, which transmit different information. The residual branch facilitates feature reuse, while the dense branch promotes the exploration of new features. In addition, CSAN uses merge-and-run mappings to assist information integration between the branches, and distinguishes the information carried by different branch features through an attention module designed in this paper. Extensive experiments show that the proposed method is superior to state-of-the-art methods in both qualitative and quantitative evaluation. Furthermore, CSAN better suppresses chromatic aberration while enhancing low-light images.
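The channel-splitting idea described in this abstract can be sketched numerically. The helper below is a hypothetical illustration, not the authors' implementation: it splits a feature map into two channel groups, applies a residual (skip-connection) path to one and a dense (concatenation) path to the other, then reweights the merged channels with a simple softmax-style channel attention. The callables `conv_r` and `conv_d` are stand-ins for the learned convolutions.

```python
import numpy as np

def channel_split_attention(x, conv_r, conv_d):
    """Illustrative channel-splitting step on a (C, H, W) feature map.

    The first C//2 channels feed a residual branch, the rest a dense
    branch; a crude channel attention then reweights the merged result.
    """
    c = x.shape[0] // 2
    xr, xd = x[:c], x[c:]

    # Residual branch: reuse input features via a skip connection.
    r = conv_r(xr) + xr
    # Dense branch: explore new features by concatenating input and output.
    d = np.concatenate([xd, conv_d(xd)], axis=0)

    merged = np.concatenate([r, d], axis=0)
    # Channel attention: global average pooling -> softmax channel weights.
    pooled = merged.mean(axis=(1, 2))
    w = np.exp(pooled - pooled.max())
    w = w / w.sum()
    return merged * w[:, None, None]
```

Note that the real CSAN additionally uses merge-and-run mappings to exchange information between the branches before the attention module; the sketch above only shows the split-transform-reweight skeleton.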
“…There are also deep learning based methods that do not make use of Retinex theory. For example, Lore et al (2017) designed an autoencoder structure to learn a direct mapping from a low-light image to the corresponding normal-light one; Lv et al (2018) built a multi-branch network for this task; Lim and Kim (2020) introduced a Laplacian pyramid into a multi-scale structure for better feature extraction; Zheng et al (2021) presented an algorithm unrolling scheme focused mainly on denoising.…”
Motivated by their recent advances, deep learning techniques have been widely applied to the low-light image enhancement (LIE) problem. Among them, Retinex-based methods, which mostly follow a decomposition-adjustment pipeline, have taken an important place due to their physical interpretability and promising performance. However, current investigations of Retinex-based deep learning remain insufficient and ignore many useful experiences from traditional methods. Besides, the adjustment step is performed either with simple image processing techniques or by complicated networks, both of which are unsatisfactory in practice. To address these issues, we propose a new deep learning framework for the LIE problem. The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks that consider both global brightness and local brightness sensitivity. By virtue of algorithm unrolling, both implicit priors learned from data and explicit priors borrowed from traditional methods can be embedded in the network, facilitating better decomposition. Meanwhile, the consideration of global and local brightness guides the design of simple yet effective network modules for adjustment. Besides, to avoid manual parameter tuning, we also propose a self-supervised fine-tuning strategy that consistently delivers promising performance. Experiments on a series of typical LIE datasets demonstrate the effectiveness of the proposed method, both quantitatively and visually, compared with existing methods.
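The decomposition-adjustment pipeline that these Retinex-based methods share can be illustrated with a minimal sketch. The function below is not the paper's unrolled network; it replaces the learned decomposition with the classical per-pixel channel-maximum illumination estimate, and the adjustment networks with a single global gamma curve, assuming an input image normalized to [0, 1]:

```python
import numpy as np

def retinex_enhance(img, gamma=0.45, eps=1e-6):
    """Minimal Retinex-style decomposition-adjustment sketch.

    Decomposes I = R * L with L estimated as the per-pixel channel
    maximum, brightens L with a gamma curve, and recomposes.
    """
    L = img.max(axis=2, keepdims=True)   # crude illumination estimate
    R = img / (L + eps)                  # reflectance
    L_adj = np.power(L, gamma)           # global brightness adjustment
    return np.clip(R * L_adj, 0.0, 1.0)
```

In the deep variants cited here, both stages become learned: the decomposition network embeds data-driven and hand-crafted priors, and the adjustment networks also account for local brightness sensitivity rather than applying one global curve.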
“…Zhang et al [18] also designed an effective network based on the Retinex theory to enhance low-light images. Lim et al [19] proposed a deep-stacked Laplacian restorer (DSLR) to recover the global illumination and local details from the original input. Furthermore, some methods that are not based on the Retinex theory are also proposed.…”
Images captured in weak illumination conditions suffer seriously degraded quality. Addressing the degradations of low-light images can effectively improve their visual quality and the performance of high-level vision tasks. In this paper, we propose a novel Real-low to Real-normal Network for low-light image enhancement, dubbed R2RNet, based on the Retinex theory, which includes three subnets: a Decom-Net, a Denoise-Net, and a Relight-Net. These three subnets are used for decomposition, denoising, and contrast enhancement, respectively. Unlike most previous methods trained on synthetic images, we collect the first Large-Scale Real-World paired low-/normal-light image dataset (LSRW dataset) for training. Our method can properly improve contrast and suppress noise simultaneously. Extensive experiments on publicly available datasets demonstrate that our method outperforms existing state-of-the-art methods by a large margin, both quantitatively and visually. We also show that the performance of a high-level vision task (i.e., face detection) in low-light conditions can be effectively improved by using the enhanced results obtained by our method. Our code and the LSRW dataset are available at: https://github.com/abcdef2000/R2RNet.
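The three-subnet flow described in this abstract can be sketched as a pipeline. In the toy version below, each learned subnet is replaced by a simple hand-crafted stand-in: a channel-maximum split for Decom-Net, a box filter on the reflectance for Denoise-Net, and gamma relighting of the illumination for Relight-Net. None of these are R2RNet's actual components; the point is only the decompose, then denoise, then relight ordering.

```python
import numpy as np

def r2r_style_pipeline(img, gamma=0.5, k=3, eps=1e-6):
    """Toy decompose/denoise/relight pipeline for an (H, W, 3) image
    in [0, 1], mirroring the three-stage structure of R2RNet."""
    # Decom-Net stand-in: split image into illumination and reflectance.
    L = img.max(axis=2, keepdims=True)
    R = img / (L + eps)

    # Denoise-Net stand-in: a k x k box filter over the reflectance.
    pad = k // 2
    Rp = np.pad(R, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    Rd = np.zeros_like(R)
    for i in range(k):
        for j in range(k):
            Rd += Rp[i:i + R.shape[0], j:j + R.shape[1]]
    Rd /= k * k

    # Relight-Net stand-in: gamma-brighten the illumination, recompose.
    return np.clip(Rd * np.power(L, gamma), 0.0, 1.0)
```

Denoising the reflectance rather than the raw image is the key design choice this structure enables: noise amplified by brightening lives mostly in R, so it can be cleaned before the relit illumination is reapplied.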