OCEANS 2016 MTS/IEEE Monterey, 2016
DOI: 10.1109/oceans.2016.7761342

Estimation of ambient light and transmission map with common convolutional architecture

Citations: Cited by 56 publications (61 citation statements)
References: 17 publications
“…Deep learning techniques in the context of underwater robots have mainly been used with an opposite goal in mind: underwater color correction. In [37] a CNN (Convolutional Neural Network) is trained to estimate the ambient light and thus to dehaze the image. An approach closer to style transfer is taken in [38], where the cross-domain relations between air and underwater images are learned.…”
Section: Style Transfer
confidence: 99%
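As a rough illustration of the technique described in the passage above (a CNN that regresses the ambient light and a per-pixel transmission map from an underwater image), a minimal PyTorch sketch might look like the following; the layer sizes, the two-headed design and the output ranges are illustrative assumptions, not the architecture reported in [37].

```python
# Minimal sketch: a CNN predicting a global ambient light (RGB) and a per-pixel
# transmission map from an underwater image. Layer sizes, the two-headed design
# and the sigmoid output ranges are illustrative assumptions only.
import torch
import torch.nn as nn

class AmbientTransmissionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Head 1: global ambient light A (one RGB value per image).
        self.ambient_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3), nn.Sigmoid()
        )
        # Head 2: transmission map t(x) in (0, 1), same spatial size as the input.
        self.transmission_head = nn.Sequential(
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid()
        )

    def forward(self, x):
        f = self.features(x)
        return self.ambient_head(f), self.transmission_head(f)

if __name__ == "__main__":
    net = AmbientTransmissionNet()
    image = torch.rand(1, 3, 64, 64)          # dummy underwater patch in [0, 1]
    ambient, transmission = net(image)
    print(ambient.shape, transmission.shape)  # (1, 3) and (1, 1, 64, 64)
```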
“…RGHS (Relative Global Histogram Stretching) [45] and RD (Rayleigh Distribution) [46] perform both contrast and color correction. Recent advances in deep learning have also been applied to both restoration and enhancement techniques [37,38].…”
Section: Underwater Image Enhancement
confidence: 99%
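To illustrate the contrast-correction idea behind stretching-based enhancement methods such as RGHS, the snippet below applies a generic per-channel histogram stretch; the percentile limits are assumptions made for illustration and are not the adaptive stretching range used by RGHS itself.

```python
# Generic per-channel histogram stretching for contrast correction.
# The 1st/99th-percentile clipping limits are illustrative assumptions;
# RGHS chooses its stretching range adaptively per channel.
import numpy as np

def stretch_channels(image, low_pct=1.0, high_pct=99.0):
    """image: float array in [0, 1], shape (H, W, 3). Returns a stretched copy."""
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        lo = np.percentile(image[:, :, c], low_pct)
        hi = np.percentile(image[:, :, c], high_pct)
        out[:, :, c] = np.clip((image[:, :, c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out
```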
“…On the other hand, underwater image restoration methods aim to recover clear images by exploiting the optical imaging model. The most important task for restoration is to estimate two key model parameters, i.e., transmission and ambient light, which are usually estimated either by prior-based approaches [6, 18-25] or by learning-based approaches [26-30]. The prior-based approaches heavily depend on the reliability of certain prior information, such as the dark channel prior [18-21], red channel prior [6], haze-line prior [25] and so on.…”
Section: Introduction
confidence: 99%
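The optical imaging model referred to in the passage above is commonly written as I(x) = J(x) t(x) + A (1 - t(x)), where I is the observed image, J the scene radiance, t the transmission map and A the ambient light; once t and A are estimated, the scene is recovered by inverting the model. A minimal NumPy sketch of that inversion follows (the lower bound on t is an assumed regularisation, in line with common dehazing practice).

```python
# Recover the scene radiance J from the observed image I once the two model
# parameters (transmission map t and ambient light A) have been estimated:
#     I(x) = J(x) * t(x) + A * (1 - t(x))   =>   J(x) = (I(x) - A) / t(x) + A
import numpy as np

def restore(image, transmission, ambient, t_min=0.1):
    """image: (H, W, 3) in [0, 1]; transmission: (H, W); ambient: (3,)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
    restored = (image - ambient) / t + ambient
    return np.clip(restored, 0.0, 1.0)
```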
“…Thus, a mismatch between the adopted prior and the target scene may incur significant estimation error and consequently yield distorted results [23]. By contrast, the learning-based approaches aim to obtain more robust and accurate estimation by exploring the relations between the underwater images and the corresponding parameters in a data-driven manner, such as [26-30]. To this end, it is essential to have a suitable training dataset and an efficient neural network that can be trained to learn such relations.…”
Section: Introduction
confidence: 99%
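Such training data are often produced synthetically by applying the same imaging model to clear images with randomly drawn parameters; the sketch below illustrates the idea, with sampling ranges that are assumptions for illustration rather than any published protocol.

```python
# Synthesise a degraded training sample from a clear image by applying the
# imaging model with randomly drawn ambient light and transmission.
# The sampling ranges are illustrative assumptions, not a published protocol.
import numpy as np

def synthesize_sample(clear_image, rng=None):
    """clear_image: (H, W, 3) floats in [0, 1]. Returns (degraded, transmission, ambient)."""
    rng = rng if rng is not None else np.random.default_rng()
    ambient = rng.uniform(0.6, 1.0, size=3)   # random ambient light colour (assumed range)
    t = rng.uniform(0.2, 0.9)                 # single global transmission value (assumed range)
    degraded = clear_image * t + ambient * (1.0 - t)
    return degraded.astype(np.float32), t, ambient
```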
“…The priors are computed directly or trained beforehand; however, the assumptions behind the priors easily break in real robot operating situations, such as smoke-filled regions and underwater scenes. Shin, Cho, Pandey, and Kim (2016) proposed a CNN-based method for the estimation of a colored ambient light and a transmission map. Also, Li, Skinner, Eustice, and Johnson-Roberson () presented WaterGAN, which is used to generate hazy images.…”
Section: Introduction
confidence: 99%