2021
DOI: 10.1109/access.2021.3068534
MARN: Multi-Scale Attention Retinex Network for Low-Light Image Enhancement

Abstract: Images captured in low-light conditions often suffer from poor visibility, e.g., low contrast, loss of detail, and color distortion, and image enhancement methods can be used to improve the image quality. Previous methods have generally obtained a smooth illumination map to enhance the image but have ignored details, leading to inaccurate illumination estimations. To solve this problem, we propose a multi-scale attention Retinex network (MARN) for low-light image enhancement, which learns an image-to-illumination m…

Cited by 22 publications (7 citation statements). References 43 publications.
“…It can also process the color information and the gray information in the image separately. In image enhancement applications, it can achieve good color retention and good enhancement performance at the same time [21,22]. The HSI color model is completely different from the above two color models, which are based on physics or process; it is a perception-based color model [23].…”
Section: Color Model
confidence: 99%
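The excerpt's point is that HSI separates chromatic information (hue, saturation) from gray-level information (intensity), so an enhancement can adjust brightness without shifting colors. A minimal per-pixel RGB-to-HSI conversion can illustrate this; `rgb_to_hsi` is a hypothetical helper written for this sketch, using the standard geometric HSI formulas:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (each channel in [0, 1]) to (H, S, I).

    H is in radians; S and I are in [0, 1]. An enhancement that modifies
    only I leaves H and S, and hence the perceived color, untouched.
    """
    i = (r + g + b) / 3.0                      # intensity: mean of channels
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i        # saturation
    # hue from the geometric (angle-on-the-color-circle) definition
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.acos(max(-1.0, min(1.0, num / den)))
    if b > g:
        h = 2 * math.pi - h
    return h, s, i
```

For a gray pixel such as (0.5, 0.5, 0.5) this yields S = 0 and I = 0.5, confirming that gray content lives entirely in the intensity channel; a pure red pixel (1, 0, 0) yields H = 0 and S = 1.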
“…However, their performance is often unsatisfactory, and Lighten-Net [31] unnaturally brightens the center of the image, which deviates from the ground truth. Furthermore, the definition of the ground-truth illumination and reflectance components is not clear, which makes it difficult to guide the training process [33]. Recently, Zero-DCE [9] proposed an incisive and simple nonlinear curve mapping trained with non-reference loss functions; however, because the method depends only on non-reference losses, oversaturated colors can often be observed in the enhanced results.…”
Section: B. Deep Learning-Based
confidence: 99%
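The curve mapping attributed to Zero-DCE above is the published quadratic light-enhancement curve LE(x) = x + α·x·(1 − x), applied iteratively (eight times in the paper). A sketch follows; the scalar `alphas` here are a simplification standing in for the per-pixel curve-parameter maps the network actually predicts:

```python
def light_enhance_curve(x, alpha):
    # Zero-DCE quadratic curve: LE(x) = x + alpha * x * (1 - x).
    # For alpha in [-1, 1] and x in [0, 1], the output stays in [0, 1],
    # and the endpoints x = 0 and x = 1 are fixed points.
    return x + alpha * x * (1.0 - x)

def zero_dce_enhance(pixel, alphas):
    # Apply the curve iteratively; Zero-DCE uses 8 iterations,
    # each with its own learned parameter map.
    for a in alphas:
        pixel = light_enhance_curve(pixel, a)
    return pixel
```

With positive alphas, mid-tones are lifted while black and white are preserved, which is why the mapping brightens dark regions without clipping; the oversaturation the excerpt mentions arises because the non-reference losses do not anchor the result to a ground-truth color.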
“…σ_x, σ_y, and σ_xy are the variances and the covariance of the predicted and ground-truth images, respectively. Constants c_1 and c_2 are used to prevent the denominator from being zero (c_1 = 0.0001 and c_2 = 0.0009, the same as used in [33]). The SSIM loss function requires grayscale images.…”
Section: Loss Function
confidence: 99%
“…In recent years, with the rapid development of machine learning, an increasing number of researchers have applied it to video image processing [11,12,13,14,15]. Based on the theory of traditional algorithms, some learning models have been created.…”
Section: Related Work
confidence: 99%