2021
DOI: 10.1109/tip.2020.3045636
A Global-Local Self-Adaptive Network for Drone-View Object Detection

Cited by 116 publications (59 citation statements)
References 36 publications
“…Compared to DREN [44] and ResNet-101, the AP is increased by 4.5% with the ResNet-50 backbone and by 2.4% with the ResNet-101 backbone on the UAVDT dataset. Compared with GLSAN [8], which uses an extra super-resolution network to enlarge regions, our method outperforms it by 5.49% AP on VisDrone2019. The performance of coarse-to-fine methods is limited by the initial detection, which can bias region generation (i.e., small objects are easily missed in coarse-level detection).…”
Section: Comparison With State-of-the-Art Models
confidence: 96%
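The limitation this statement raises is structural: in a coarse-to-fine detector, the fine pass only ever looks inside regions seeded by coarse detections. The following minimal Python sketch (not code from GLSAN or the citing paper; `fine_detect` is a hypothetical stand-in detector) illustrates why an object missed at the coarse stage can never be recovered later.

```python
# Minimal sketch of the coarse-to-fine pattern: crop regions are generated
# only from coarse detections, so objects the coarse pass misses are never
# revisited by the fine pass. Illustrative only, not a published method.
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image pixels

def expand(box: Box, margin: float, w: int, h: int) -> Box:
    """Grow a coarse box into a crop region, clipped to the image bounds."""
    x1, y1, x2, y2 = box
    return (max(0.0, x1 - margin), max(0.0, y1 - margin),
            min(float(w), x2 + margin), min(float(h), y2 + margin))

def coarse_to_fine(image_size: Tuple[int, int],
                   coarse_boxes: List[Box],
                   fine_detect: Callable[[Box], List[Box]]) -> List[Box]:
    """Re-detect inside regions seeded by coarse detections.

    The bias the citing paper points out lives in the loop below: only
    regions derived from `coarse_boxes` are examined at fine scale.
    """
    w, h = image_size
    detections: List[Box] = list(coarse_boxes)  # keep coarse-level results
    for region in (expand(b, margin=32.0, w=w, h=h) for b in coarse_boxes):
        rx1, ry1, *_ = region
        # Fine detections arrive in crop coordinates; shift to image frame.
        for fx1, fy1, fx2, fy2 in fine_detect(region):
            detections.append((fx1 + rx1, fy1 + ry1, fx2 + rx1, fy2 + ry1))
    return detections  # in practice followed by NMS to merge duplicates

# Toy usage: a fake fine detector that finds one small object per region.
if __name__ == "__main__":
    fake_fine = lambda region: [(4.0, 4.0, 12.0, 12.0)]
    print(coarse_to_fine((1920, 1080), [(100, 100, 300, 260)], fake_fine))
```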
“…On the UAVDT dataset, we achieve 22.4% AP and 38.6% AP50 with Cascade R-CNN as the detector. Compared to GLSAN [8] with Cascade R-CNN, the AP is increased by 3.4% and the AP50 by 8.1%. On the VisDrone dataset, AdaZoom achieves 40.33% AP and 66.94% AP50, outperforming the SOTA performance by a large margin.…”
Section: Comparison With State-of-the-Art Models
confidence: 98%
“…Reference [36] proposed a depthwise separable attention-guided network (DAGN), which integrates the feature series with a concentration block so that the model can clearly distinguish significant features from trivial ones. Reference [37] integrated a global-local fusion strategy with a progressive multi-scale network to perform detection more accurately. In [7], an anchor-free method was introduced.…”
Section: Related Work
confidence: 99%
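To make the DAGN description above concrete, here is a minimal PyTorch sketch of the general pattern it names: depthwise separable convolutions followed by a channel-attention gate that re-weights significant versus trivial features. This is an illustration of the pattern under my own assumptions (an SE-style gate), not DAGN's published architecture.

```python
# Sketch of a depthwise-separable block with channel attention: cheap
# per-channel 3x3 + 1x1 convolutions, then a squeeze-and-excitation style
# gate that emphasizes informative channels. Illustrative, not DAGN itself.
import torch
import torch.nn as nn

class DepthwiseSeparableAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Depthwise separable convolution: per-channel 3x3, then 1x1 mixing.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1,
                                   bias=False)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)
        # Channel-attention gate (squeeze-and-excitation style).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.norm(self.pointwise(self.depthwise(x))))
        return y * self.gate(y)  # re-weight channels by learned importance

# Toy usage on a feature map such as one from a drone-image backbone.
if __name__ == "__main__":
    block = DepthwiseSeparableAttention(channels=64)
    feats = torch.randn(1, 64, 80, 80)
    print(block(feats).shape)  # torch.Size([1, 64, 80, 80])
```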
“…Yang et al. 2019), DMNet (Li et al. 2020a), GLSAN (Deng et al. 2021), and DREN (Zhang et al. 2019). Results on VisDrone.…”
confidence: 99%