2022
DOI: 10.1609/aaai.v36i2.20046
Semantically Contrastive Learning for Low-Light Image Enhancement

Abstract: Low-light image enhancement (LLE) remains challenging due to the prevailing low-contrast and weak-visibility problems of single RGB images. In this paper, we respond to an intriguing learning-related question: can leveraging both accessible unpaired over/underexposed images and high-level semantic guidance improve the performance of cutting-edge LLE models? Here, we propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE). Beyond the existing LLE wisdom, it ca…

Cited by 62 publications (23 citation statements)
References 31 publications
“…Comparison approaches: We carefully selected 12 state-of-the-art approaches as comparison methods for validating the superiority of this FDMLNet for light enhancement. These selected methods contained three traditional methods, i.e., LR3M [18], simultaneous reflection and illumination estimation (SRIE) [19], and the bioinspired multiexposure fusion framework (BIMEF) [20]; seven supervised-learning-based methods, i.e., RetinexNet [54], deep stacked Laplacian restorer (DSLR) [49], KinD [28], DLN [14], DRBN [59], SCL-LLE [52], and MIRNet [65]; an unsupervised-learning-based method, i.e., EnlightenGAN [29]; and a zero-reference-learning-based method, i.e., Zero-DCE++ [1]. Notably, the three traditional methods were coded in MATLAB and the other nine comparison methods were coded in Python and PyTorch.…”
Section: Experimental Results and Analysis
confidence: 99%
“…KinD [28] failed to recover the inherent details and introduced unsatisfactory color casts in local dark regions of the image. SCL-LLE [52] generated undesired images with an unnatural visual experience (observed in picture g in Figure 10). MIRNet [65] succeeded in improving the image brightness, but the enhanced images exhibited a color deviation and low contrast.…”
Section: Experimental Results and Analysis
confidence: 99%
“…Few existing methods can deal with degraded images by low- and high-level vision interaction [15,17,18,23,24]. These methods usually incorporate high-level tasks into the overall framework, providing more image priors and further benefiting degraded image restoration.…”
Section: Low-level and High-level Vision Interaction
confidence: 99%
“…These methods usually incorporate the high-level tasks into the overall framework, providing more image priors and further benefiting degraded image restoration. For example, SCL-LLE [17] proposes an effective semantically contrastive learning framework for low-light image enhancement, which embeds the semantic segmentation task into the enhancement model. As a result, the offered high-level semantic knowledge can be used to guide the illumination enhancement process.…”
Section: Low-level and High-level Vision Interaction
confidence: 99%
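The citation statements above describe SCL-LLE's core idea: treat well-exposed images as positives and unpaired over/underexposed images as negatives, and pull the enhanced image toward the former in a feature space. The sketch below is a minimal, illustrative contrastive objective in that spirit only; the function name, the use of a simple normalized-L1 ratio, and the assumption that features come from some upstream (e.g. frozen) backbone are all this sketch's own simplifications, not the paper's actual loss.

```python
import numpy as np

def contrastive_enhancement_loss(anchor, positives, negatives, eps=1e-8):
    """Toy contrastive ratio loss: small when the enhanced image's features
    (anchor) sit near well-exposed positives and far from over/underexposed
    negatives. Shapes: anchor (D,), positives (P, D), negatives (N, D)."""
    def unit_rows(x):
        # Normalize each feature vector to unit length for scale invariance.
        x = np.atleast_2d(np.asarray(x, dtype=float))
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

    a = unit_rows(anchor)      # (1, D)
    p = unit_rows(positives)   # (P, D)
    n = unit_rows(negatives)   # (N, D)

    d_pos = np.abs(a - p).mean()   # mean L1 distance to positives
    d_neg = np.abs(a - n).mean()   # mean L1 distance to negatives
    return d_pos / (d_neg + eps)   # minimized by near-positive, far-negative
```

Minimizing this ratio during training would push the enhancer's output toward the exposure statistics of the positive set without requiring paired low/normal-light supervision, which is the appeal the citing papers highlight.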