2021
DOI: 10.48550/arxiv.2106.14501
Preprint

R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network

Abstract: Images captured under weak illumination conditions suffer from seriously degraded quality. Addressing the various degradations of low-light images can effectively improve both the visual quality of the image and the performance of high-level vision tasks. In this paper, we propose a novel Real-low to Real-normal Network for low-light image enhancement, dubbed R2RNet, based on the Retinex theory, which includes three subnets: a Decom-Net, a Denoise-Net, and a Relight-Net. These three subnets are used for decomposing, den…
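Since the abstract only names the three subnets, the following is a minimal, hypothetical PyTorch sketch of how a Retinex-style pipeline with a Decom-Net, Denoise-Net, and Relight-Net might compose (enhanced image = denoised reflectance × relit illumination). The module internals, channel counts, and activations are placeholder assumptions, not the authors' architecture.

```python
# Hypothetical Retinex-style composition of the three subnets named in the
# abstract; internals are placeholders, not the R2RNet architecture.
import torch
import torch.nn as nn

class DecomNet(nn.Module):
    """Splits an input image into reflectance R (3 ch) and illumination L (1 ch)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, 3, padding=1), nn.Sigmoid())

    def forward(self, img):
        out = self.body(img)
        return out[:, :3], out[:, 3:]  # reflectance, illumination

class DenoiseNet(nn.Module):
    """Placeholder: removes noise from the reflectance component."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, refl):
        return torch.clamp(refl + self.body(refl), 0, 1)  # residual denoising

class RelightNet(nn.Module):
    """Placeholder: predicts an enhanced (brighter) illumination map."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, illum):
        return self.body(illum)

def enhance(img, decom, denoise, relight):
    """Retinex recombination: enhanced = denoised reflectance * relit illumination."""
    refl, illum = decom(img)
    return denoise(refl) * relight(illum)
```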


Cited by 4 publications (3 citation statements)
References 51 publications (45 reference statements)

Citation statements (ordered by relevance):
“…In qualitative evaluation, we tested on datasets from different scenarios, including the MIT dataset [46] and the LSRW dataset [47]. In the quantitative evaluation, we tested on the five datasets most commonly used for super resolution, including Set5, Set14, B100, Urban100, and Manga109.…”
Section: ) Datasets and Metricsmentioning
confidence: 99%
“…The illumination loss includes the perceptual loss, SSIM loss, and L1 loss. VGG-16 [33] is a classic deep convolutional neural network that has been trained on large-scale image classification tasks. Using VGG-16 features in the loss function enables the network to learn to generate enhanced images with higher perceptual quality.…”
Section: Irecovery-netmentioning
confidence: 99%
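As a rough illustration of the loss described in the statement above, here is a minimal PyTorch sketch that combines a VGG-16 perceptual term, an SSIM term, and an L1 term. The chosen VGG layer cut-off, the box-window SSIM approximation, and the term weights are assumptions for illustration, not the cited paper's exact formulation.

```python
# Sketch of an illumination loss = perceptual (VGG-16) + SSIM + L1 terms.
# Requires torchvision >= 0.13 for the string-valued `weights` argument.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L1 distance between frozen VGG-16 feature maps of prediction and target.
    Inputs are assumed to already be ImageNet-normalized."""
    def __init__(self, layer_idx=16):  # cut at relu3_3 (an assumption)
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in feats.parameters():
            p.requires_grad_(False)
        self.feats = feats

    def forward(self, pred, target):
        return F.l1_loss(self.feats(pred), self.feats(target))

def ssim_loss(pred, target, c1=0.01 ** 2, c2=0.03 ** 2, window=11):
    """1 - SSIM using a uniform (box) window, a common simplification."""
    mu_x = F.avg_pool2d(pred, window, 1, window // 2)
    mu_y = F.avg_pool2d(target, window, 1, window // 2)
    var_x = F.avg_pool2d(pred * pred, window, 1, window // 2) - mu_x ** 2
    var_y = F.avg_pool2d(target * target, window, 1, window // 2) - mu_y ** 2
    cov = F.avg_pool2d(pred * target, window, 1, window // 2) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim.mean()

class IlluminationLoss(nn.Module):
    """Weighted sum of perceptual, SSIM, and L1 terms (weights are assumptions)."""
    def __init__(self, w_perc=1.0, w_ssim=1.0, w_l1=1.0):
        super().__init__()
        self.perceptual = PerceptualLoss()
        self.w_perc, self.w_ssim, self.w_l1 = w_perc, w_ssim, w_l1

    def forward(self, pred, target):
        return (self.w_perc * self.perceptual(pred, target)
                + self.w_ssim * ssim_loss(pred, target)
                + self.w_l1 * F.l1_loss(pred, target))
```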
“…Benchmark Description and Metrics. For low-light image enhancement, we randomly sampled 100 images from the MIT dataset [2] and 50 test images from the LSRW dataset [9] for testing. We used two full-reference metrics, PSNR and SSIM, and five no-reference metrics including DE [20], EME [1], LOE [23], and NIQE [23].…”
Section: Implementation Detailsmentioning
confidence: 99%
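For reference, the two full-reference metrics mentioned in the statement above (PSNR and SSIM) can be computed as in the sketch below. The helper names and the use of scikit-image's structural_similarity are illustrative assumptions, not the benchmark's actual evaluation harness.

```python
# PSNR computed directly from its definition; SSIM delegated to scikit-image.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def evaluate_pair(reference: np.ndarray, enhanced: np.ndarray) -> dict:
    """Return PSNR and SSIM for one (ground-truth, enhanced) uint8 image pair."""
    return {
        "psnr": psnr(reference, enhanced),
        # channel_axis=-1 assumes HxWxC color images and scikit-image >= 0.19.
        "ssim": structural_similarity(reference, enhanced,
                                      channel_axis=-1, data_range=255),
    }
```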