2021
DOI: 10.1016/j.image.2021.116250

UIEC^2-Net: CNN-based underwater image enhancement using two color space

Cited by 147 publications (59 citation statements)
References 32 publications
“…The above-mentioned networks still show a considerable failure rate because they do not handle the saturation and contrast problems. In [23], an end-to-end CNN with three blocks named RGB (for basic operations), HSV (for saturation and luminance adjustment), and attention (a quality-enhancement stage) is proposed, where the final restored image is produced by a weighted sum of the RGB block output and the attention block’s RGB component, together with a weighted sum of the HSV block output and the attention block’s HSV component. The input of the attention block is a concatenation of the raw image and the outputs of the other two blocks (the RGB block and the HSV block).…”
Section: Introduction
confidence: 99%
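The weighted fusion described in this excerpt can be sketched as follows. This is a minimal illustration, not the paper's implementation: `fuse_branches`, the weight-map shapes, and the normalization are assumptions, and the paper's attention block may produce and apply its weights differently.

```python
import numpy as np

# Sketch of the described fusion: the attention block is assumed to emit
# two per-pixel weight maps (w_rgb, w_hsv) that blend the RGB-branch and
# HSV-branch outputs into the final restored image.
def fuse_branches(rgb_out, hsv_out, w_rgb, w_hsv):
    """Blend two branch outputs with per-pixel weights.

    rgb_out, hsv_out : (H, W, 3) arrays, both already converted to RGB space
    w_rgb, w_hsv     : (H, W, 1) non-negative weight maps
    """
    total = w_rgb + w_hsv + 1e-8  # normalize so the weights sum to 1 per pixel
    return (w_rgb * rgb_out + w_hsv * hsv_out) / total
```

With equal weights this reduces to a per-pixel average of the two branches.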
“…Based on this aspect, images were divided into low-light and shallow-water images, as shown in Figures 6a and 7a. To better illustrate the effect of the proposed network on different types of underwater images, and inspired by [11], the test images were divided into five types: bluish, greenish, yellowish, low-illuminated, and shallow-water underwater images, as shown in Figures 3–7, respectively. ULAP relied mostly on underwater imaging models and prior knowledge, which made it less robust to complex scenes and even aggravated the color cast.…”
Section: Comparisons With SOTA Methods on Full-Reference Datasets
confidence: 99%
“…Comparison Methods: WE-Net was compared with eight methods, including two traditional methods (UDCP [3] and ULAP [45]), a residual-network-based method (UResnet [8]), a shallow-network-based method (Shallow-UWnet [9]), a color-balance-based method (UIEC^2-Net [11]), a physical-model and CNN-fusion-based method (Chen et al. [46]), a multi-stage method (Deep-WaveNet [10]), and a DWT-based method (Ma et al. [17]).…”
Section: Implementation Details
confidence: 99%
“…We use the SSIM loss [36] to impose structure and texture similarity on the predicted image. In this paper, we use gray images, converted from RGB images, to compute the SSIM score; for each pixel, the SSIM value is computed within an 11 × 11 image patch around the pixel.…”
Section: SSIM Loss
confidence: 99%
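The SSIM computation described above can be sketched in plain NumPy. This is an illustrative sketch, not the authors' code: the excerpt does not specify the window weighting, so a uniform 11 × 11 window is assumed (many SSIM implementations instead use a Gaussian window), and `rgb_to_gray` and `ssim_loss` are assumed helper names.

```python
import numpy as np

def rgb_to_gray(img):
    # standard luma conversion (assumed; the excerpt only says "gray images")
    return img @ np.array([0.299, 0.587, 0.114])

def ssim_map(x, y, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Per-pixel SSIM of two grayscale images in [0, 1], over win x win patches."""
    h = win // 2
    H, W = x.shape
    out = np.zeros((H - 2 * h, W - 2 * h))
    for i in range(h, H - h):
        for j in range(h, W - h):
            px = x[i - h:i + h + 1, j - h:j + h + 1]
            py = y[i - h:i + h + 1, j - h:j + h + 1]
            mx, my = px.mean(), py.mean()        # local means
            vx, vy = px.var(), py.var()          # local variances
            cov = ((px - mx) * (py - my)).mean() # local covariance
            out[i - h, j - h] = ((2 * mx * my + c1) * (2 * cov + c2)) / \
                                ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return out

def ssim_loss(x, y):
    # loss = 1 - mean SSIM, so identical images give zero loss
    return 1.0 - ssim_map(x, y).mean()
```

Identical inputs give a loss of exactly zero (up to floating-point error), and the loss grows as structural similarity degrades.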