2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00234
NonLocal Channel Attention for NonHomogeneous Image Dehazing

Year published (citing works): 2020–2023

Cited by 23 publications (9 citation statements)
References 42 publications
“…1) and AMIDR-Net (Fig. 3) are similar in terms of the general architecture which is inspired by U-Net [41,31]. An encoder is shared by three bottlenecks followed by four decoders.…”
Section: Network Architecture (mentioning; confidence: 99%)
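For readers who want a concrete picture of the layout this excerpt describes, below is a minimal PyTorch sketch of one encoder whose features feed several bottleneck-plus-decoder branches. The excerpt reports three bottlenecks and four decoders but does not say how they are wired, so this sketch pairs one decoder per bottleneck; the class name, branch count, and layer widths are illustrative assumptions, not the cited paper's configuration.

import torch
import torch.nn as nn

class SharedEncoderMultiBranch(nn.Module):
    # Hypothetical layout: one shared encoder, N bottleneck+decoder branches.
    def __init__(self, num_branches: int = 3):  # branch count is an assumption
        super().__init__()
        self.encoder = nn.Sequential(  # downsample 4x: 3 -> 128 channels
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.bottlenecks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(num_branches)
        )
        self.decoders = nn.ModuleList(
            nn.Sequential(  # upsample back to the input resolution
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            )
            for _ in range(num_branches)
        )

    def forward(self, x):
        feats = self.encoder(x)  # encoder runs once; its features are shared
        return [dec(bn(feats)) for bn, dec in zip(self.bottlenecks, self.decoders)]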
“…Table 2 details the structure of decoders. Each decoder includes four levels of cascaded attention modules (squeeze-and-excitation [17] or dilation inception modules [30]), a dense transitional block, and two residual blocks. It means that there…”
Section: Network Architecture (mentioning; confidence: 99%)
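As a concrete reference for the squeeze-and-excitation attention named in the excerpt, here is a minimal PyTorch sketch of an SE channel-attention block in the spirit of Hu et al. [17]. The reduction ratio of 16 is the common default from that paper, not a value confirmed for the decoders described here.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Squeeze-and-excitation: reweight channels using globally pooled statistics.
    def __init__(self, channels: int, reduction: int = 16):  # r=16 is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # "squeeze": global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # "excite": rescale each channel

# Usage: the block preserves the input shape, so it drops into any decoder level.
x = torch.randn(2, 128, 32, 32)
assert SEBlock(128)(x).shape == x.shape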
“…The method is based on two proposed models following similar ideology [43], the 'AtJw' and the 'AtJwD' models. As illustrated in Fig.…”
Section: IPAL-NonLocal (mentioning; confidence: 99%)
“…Recently, Convolutional Neural Networks (CNN) have proven their capability in extracting better features that ease the subsequent steps of classification and detection. This has been empirically demonstrated in various fields, such as object classification [45,27], object detection [25,71], and inverse image problems such as dehazing [58,99], denoising [51,73], and HDR estimation [52,59,11], etc. Having said that, learning techniques typically require a huge amount of data for training and/or regularization of the training, such as [7,97,3,69,8].…”
Section: Introduction (mentioning; confidence: 99%)