2022
DOI: 10.1007/s11042-022-11940-1
CE-FPN: enhancing channel information for object detection

Cited by 102 publications (46 citation statements)
References 50 publications
“…The FPN structure [22] has an aliasing effect in cross-scale fusion. Multi-scale fusion and cross-layer connections are widely used to improve model performance.…”
Section: E. Attention Detection Head Module
confidence: 99%
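To make the cross-scale fusion step this excerpt refers to concrete, below is a minimal sketch of FPN-style top-down fusion, assuming the standard lateral 1x1 convolutions, nearest-neighbour upsampling, element-wise addition, and a 3x3 smoothing convolution; the upsample-then-add step is where the aliasing effect can arise. Class name, channel widths, and shapes are illustrative assumptions, not taken from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """Minimal FPN-style top-down fusion for two adjacent pyramid levels."""
    def __init__(self, c_low, c_high, c_out=256):
        super().__init__()
        # 1x1 lateral convs project both levels to a common channel width
        self.lateral_low = nn.Conv2d(c_low, c_out, kernel_size=1)
        self.lateral_high = nn.Conv2d(c_high, c_out, kernel_size=1)
        # 3x3 conv after fusion is commonly used to reduce the aliasing
        # introduced by upsampling + addition
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)

    def forward(self, feat_low, feat_high):
        # feat_low: higher-resolution, shallower map; feat_high: lower-resolution, deeper map
        high = F.interpolate(self.lateral_high(feat_high),
                             size=feat_low.shape[-2:], mode="nearest")
        fused = self.lateral_low(feat_low) + high  # cross-scale addition
        return self.smooth(fused)

# toy usage with illustrative shapes
c4 = torch.randn(1, 512, 50, 50)   # shallower level
c5 = torch.randn(1, 1024, 25, 25)  # deeper level
p4 = TopDownFusion(c_low=512, c_high=1024)(c4, c5)
print(p4.shape)  # torch.Size([1, 256, 50, 50])
```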
“…Widely used feature fusion and feature enhancement methods include integrating feature maps extracted by different convolution layers (e.g., D-DETR [9], NAS-FPN [10], and Qu [11]) and enriching the semantic information of feature maps through semantic segmentation branches and global activation modules (DES [12] and FCOS [13]). CE-FPN [14], D-DETR [9], and AugFPN [15] target the problem of low detection accuracy on multi-scale objects. CBNet [16] integrates high-resolution and low-resolution features from different backbone networks.…”
Section: A. Feature Blending
confidence: 99%
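The "global activation module" mentioned in the excerpt (as in DES [12]) is commonly realized as global pooling followed by a channel gate. The sketch below assumes a squeeze-and-excitation-style form, which may differ from the cited papers' exact designs; names and the reduction ratio are illustrative.

```python
import torch
import torch.nn as nn

class GlobalActivation(nn.Module):
    """SE-style global activation: global pooling produces per-channel
    weights that re-scale the feature map with image-level context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global context
            nn.Conv2d(channels, channels // reduction, 1),  # squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # excite
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # channel-wise re-weighting

x = torch.randn(1, 256, 32, 32)
print(GlobalActivation(256)(x).shape)  # torch.Size([1, 256, 32, 32])
```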
“…Many scholars have used the FPN [17] as the core of object detection models; for example, Ren [18] combined the FPN with SSD. FCOS [13] obtains more semantic information, AugFPN [15] makes full use of multi-scale features, NAS-FPN [10] fuses features from different regions, and CE-FPN [14], inspired by sub-pixel convolution, propagates information across scales.…”
Section: A. Feature Blending
confidence: 99%
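Since the excerpt ties CE-FPN to sub-pixel convolution, here is a hedged sketch of pixel-shuffle upsampling, in which channels are rearranged into spatial positions so channel information is carried into the upsampled map rather than interpolated away. The module name, channel counts, and scale are assumptions for illustration, not CE-FPN's exact sub-pixel skip fusion.

```python
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Illustrative sub-pixel (pixel-shuffle) upsampling: a 1x1 conv expands
    the channels, then PixelShuffle trades channels for spatial resolution."""
    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        # expand channels so pixel shuffle can convert them into resolution
        self.expand = nn.Conv2d(in_channels, out_channels * scale * scale, 1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.expand(x))

# toy usage: upsample a deep, channel-rich map by 2x
x = torch.randn(1, 2048, 25, 25)
print(SubPixelUpsample(2048, 256, scale=2)(x).shape)  # torch.Size([1, 256, 50, 50])
```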
“…However, such methods may require repeated computation of features, resulting in higher computation and memory requirements. In addition, multi-scale feature fusion [20, 21] enriches the feature representations of hard-to-discern objects by integrating deep and shallow features while adding little computational cost. The other line of effort aims to expand the receptive field by stacking atrous convolutions with different atrous rates or convolutional filters of different sizes [22, 23], which is also an effective way to improve object detection performance.…”
Section: Introduction
confidence: 99%
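As a concrete illustration of the receptive-field expansion described in this excerpt, the sketch below applies parallel 3x3 atrous convolutions with different dilation rates and concatenates the results (ASPP-style); the rates and channel width are illustrative assumptions rather than the cited papers' configurations.

```python
import torch
import torch.nn as nn

class ParallelAtrous(nn.Module):
    """Parallel 3x3 atrous convolutions with different dilation rates,
    concatenated and projected back, to enlarge the receptive field."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        # padding = dilation keeps the spatial size for a 3x3 kernel
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 32, 32)
print(ParallelAtrous(256)(x).shape)  # torch.Size([1, 256, 32, 32])
```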