2022
DOI: 10.1007/s11760-022-02252-w
ECANet: enhanced context aggregation network for single image dehazing

Cited by 16 publications (6 citation statements)
References 20 publications
“…Motivated by the work of ECA-Net [43], adaptive attention is integrated into the FPN using the Efficient Channel Attention structure shown in Figure 6. First, the feature layer (FL0) is extracted from the backbone network of YOLO-ECA or fused by the FPN.…”
Section: Methods
confidence: 99%
“…Another method, ECANet [15], aggregates both local and global features with a designed feature fusion block. GFN [16] instead applies white balance (WB), contrast enhancement (CE), and gamma correction to derive three sub-images from a hazy image, and achieves dehazing directly by fusing the sub-images. Differently, GridDehazeNet [12] builds a grid dehazing network that enhances feature flow across different scales and depths.…”
Section: Model-free Methods
confidence: 99%
“…Even if the channel features are compressed, the parameter count remains proportional to the square of the number of channels [24]. To reduce the computational burden, a one-dimensional convolution with kernel length k is used to achieve local cross-channel interaction, following the idea of ECANet, in order to capture the dependencies between channels [25]. L-CAM denotes the improved lightweight channel attention module, and the kernel length k is calculated by Formula (1):…”
Section: YOLO Algorithm Improvement
confidence: 99%
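The ECA-style mechanism described above (global pooling, then a 1-D convolution of adaptive length k across channels, then a sigmoid gate) can be sketched in NumPy. The kernel-size rule follows ECA-Net's published formula; the uniform averaging kernel stands in for the learned convolution weights and is an illustrative assumption.

```python
import math
import numpy as np

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive 1-D kernel length from ECA-Net:
    k = |log2(C)/gamma + b/gamma|, rounded up to the nearest odd number."""
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 else t + 1

def eca_attention(features, gamma=2, b=1):
    """Minimal sketch of Efficient Channel Attention on a feature map
    of shape (C, H, W): squeeze, local cross-channel 1-D conv, gate."""
    c = features.shape[0]
    k = eca_kernel_size(c, gamma, b)
    pooled = features.mean(axis=(1, 2))            # squeeze: (C,)
    pad = k // 2
    padded = np.pad(pooled, pad, mode="edge")      # local channel window
    weight = np.ones(k) / k                        # stand-in for learned kernel
    conv = np.convolve(padded, weight, mode="valid")  # (C,)
    gate = 1.0 / (1.0 + np.exp(-conv))             # sigmoid channel weights
    return features * gate[:, None, None]          # re-weight channels
```

Because each channel's weight depends only on its k neighbors rather than all C channels, the parameter cost is O(k) instead of the O(C^2) of a fully connected excitation, which is the lightweight property the passage refers to.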