2020
DOI: 10.1109/lgrs.2019.2918955

A Human–Computer Fusion Framework for Aircraft Recognition in Remote Sensing Images

Cited by 4 publications (5 citation statements)
References 20 publications
“…The calculation of F1-Score is shown in:

Journal of Electrical and Computer Engineering [35] | Figure 5 in [35] | 73.16 | 33.5
UAV-YOLO [36] | Figure 1 in [36] | 74.68 | 30.12
RFN [37] | ResNet-101 | 79.1 | 6.5
SigNMS [38] | VGG-16 | 80.6 | 6.7
Improved-YOLOv3 [39] | Figure 4 in [39] | 86.42 | 25.8
MRFF-YOLO [40] | Figure 5 in [40] | 87.16…”

Section: Results (mentioning, confidence: 99%)
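The excerpt compares detectors by F1-Score. As a minimal sketch of the standard definition (a hypothetical helper, not code from the cited papers [35]-[40]):

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 as the harmonic mean of precision and recall (standard definition)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example with made-up counts: precision = 0.80, recall ≈ 0.89, F1 ≈ 0.84
print(f1_score(tp=80, fp=20, fn=10))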
“…Recall [102] / Sensitivity [101] / True Positive Rate (TPR) [101] / Overall Accuracy [103] / Detection Probability [68] / Hit Rate [104] | TP / (TP + FN) | PA for positives
Precision [102] / Positive Predictive Value (PPV) [101] | TP / (TP + FP) | UA for positives
Specificity [105] / True Negative Rate (TNR) [101] | TN / (TN + FP) | PA for negatives
Negative Predictive Value (NPV) [101] | TN / (TN + FN) | UA for negatives
False Positive Rate (FPR) [106] / Probability of False Detection [107] / False Alarm Probability [100] | FP / (TN + FP) | 1 − (PA for negatives)
False Negative Rate (FNR) / Missing Detection Probability [100] / Missing Alarm [108] / Misidentification Score [109] | FN / (TP + FN) | 1 − (PA for positives)
False Discovery Rate (FDR) / False Alarm Probability [68] / Commission Error [110] | FP / (TP + FP) | 1 − (UA for positives)
Balanced Accuracy [101]
Intersection-over-Union (IoU) [99] / Jaccard Index [115]

Figure 3 above summarizes the frequency at which each accuracy measure is used by papers that focus on binary and multiclass classification types, as well as by scene classification, object detection, semantic segmentation, and instance segmentation applications. A comparison of the graphs indicates that some measures (for example, precision and recall) are used for all types of classification applications, although it is notable that no single measure is used by every single study, even within one category of applications (e.g., multiclass scene identification).…”

Section: Overall Accuracy (mentioning, confidence: 99%)
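To make the table's formulas concrete, here is a minimal sketch computing each measure from binary confusion-matrix counts. The definitions are the standard ones, not code from the cited survey; the balanced-accuracy and IoU formulas are the usual conventions, since the excerpt truncates them.

def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Binary-classification measures from confusion-matrix counts.

    Names follow the table above; formulas are the standard ones,
    not code from the cited survey.
    """
    recall = tp / (tp + fn)        # sensitivity / TPR / PA for positives
    precision = tp / (tp + fp)     # PPV / UA for positives
    specificity = tn / (tn + fp)   # TNR / PA for negatives
    return {
        "recall (TPR)": recall,
        "precision (PPV)": precision,
        "specificity (TNR)": specificity,
        "NPV": tn / (tn + fn),                 # UA for negatives
        "FPR": 1.0 - specificity,              # FP / (TN + FP)
        "FNR": 1.0 - recall,                   # FN / (TP + FN)
        "FDR": 1.0 - precision,                # FP / (TP + FP)
        "balanced accuracy": (recall + specificity) / 2.0,
        "IoU (Jaccard)": tp / (tp + fp + fn),  # for the positive class
    }

print(confusion_metrics(tp=80, fp=20, fn=10, tn=90))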
“…This is particularly important for metrics that are given general names such as average accuracy or accuracy. Lack of consistency in the literature as to what is meant by some measures is not just limited to F1, AP, and mAP, but includes other terms, such as false alarm rate/probability (e.g., compare [100,103]), reinforcing the importance of this issue.…”
Section: Clarity In Terminology (mentioning, confidence: 99%)