2021
DOI: 10.3390/electronics10030279
A Comparative Analysis of Object Detection Metrics with a Companion Open-Source Toolkit

Abstract: Recent outstanding results of supervised object detection in competitions and challenges are often associated with specific metrics and datasets. The evaluation of such methods applied in different contexts has increased the demand for annotated datasets. Annotation tools represent the location and size of objects in distinct formats, leading to a lack of consensus on the representation. Such a scenario often complicates the comparison of object detection methods. This work alleviates this problem along the f…

Cited by 410 publications (277 citation statements)
References 83 publications
“…This led to the overdetection of background objects and insects. In comparison, the threshold of confidence score for an object detection task by the YOLO series was often set to a value that ranged from 0.3 to 0.5 (Padilla et al, 2021; Ovchinnikova et al, 2021; Redmon et al, 2016; Redmon and Farhadi, 2018).…”
Section: Methods (mentioning)
confidence: 99%
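The confidence thresholding described in the quoted passage can be sketched in a few lines of Python. The function name and the detection tuples below are illustrative, not taken from any of the cited works:

```python
# Sketch: filtering raw detections by confidence score, using the
# 0.3-0.5 threshold range that the quoted passage reports as typical
# for YOLO-style detectors. Detections are (label, score) tuples.

def filter_by_confidence(detections, threshold=0.4):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d[1] >= threshold]

detections = [("bee", 0.92), ("background", 0.18), ("insect", 0.35), ("bee", 0.61)]
kept = filter_by_confidence(detections, threshold=0.4)
# With threshold 0.4, the low-confidence background and insect hits are dropped.
```

Raising the threshold trades recall for precision: a value near 0.5 suppresses more background over-detections, while a value near 0.3 keeps more borderline objects.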
“…The mean average precision, which is defined in (4), is the result of averaging the AP for each class. AP and mAP have several forms of calculation [31]:…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
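The quoted definition — mAP as the unweighted mean of per-class AP — can be sketched directly; the class names and AP values below are made up for illustration:

```python
# Sketch: mean average precision as the unweighted mean of per-class APs,
# per the quoted definition. Class names and AP values are illustrative.

def mean_average_precision(ap_per_class):
    """mAP: the mean of the average precision over all classes."""
    return sum(ap_per_class.values()) / len(ap_per_class)

aps = {"bee": 0.82, "wasp": 0.64, "fly": 0.70}
map_score = mean_average_precision(aps)  # (0.82 + 0.64 + 0.70) / 3, about 0.72
```

As the quoted passage notes, AP itself has several forms of calculation (different interpolation schemes and IoU thresholds), so the per-class AP values fed into this mean must all come from the same scheme to be comparable.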
“…The performance of an object detection model can be assessed in terms of precision and recall [89], where TP, FP, and FN are the numbers of true positive, false positive, and false negative detections. Intuitively, precision P measures the accuracy of assigning the correct class label, while recall R measures the accuracy of finding ground truth objects.…”
Section: Honeybee Detection (mentioning)
confidence: 99%
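A minimal sketch of the precision and recall definitions quoted above, assuming TP, FP, and FN counts have already been tallied (the example counts are made up):

```python
# Sketch: precision P = TP / (TP + FP), recall R = TP / (TP + FN),
# matching the quoted definitions. Counts are illustrative.

def precision_recall(tp, fp, fn):
    """Return (precision, recall); zero when the denominator is empty."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

p, r = precision_recall(tp=80, fp=20, fn=40)
# p = 80/100 = 0.8 (how many predictions were right)
# r = 80/120, about 0.67 (how many ground-truth objects were found)
```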
“…One can roughly assess the performance of an object detector by computing the area under the precision–recall curve. To estimate the area under the curve, the average precision uses N-point interpolation [89], …”
Section: Honeybee Detection (mentioning)
confidence: 99%
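The N-point interpolation mentioned above can be sketched as follows. This assumes the common formulation (the equation itself is elided in the excerpt): precision is sampled at N equally spaced recall levels, taking at each level the maximum precision achieved at that recall or higher. The PR values below are a toy curve, not data from the paper:

```python
# Sketch: N-point interpolated average precision. At each of N equally
# spaced recall levels, take the maximum precision among points whose
# recall is at least that level, then average. N=11 gives the classic
# PASCAL VOC 11-point scheme.

def interpolated_ap(recalls, precisions, n_points=11):
    """N-point interpolated AP over a sampled precision-recall curve."""
    total = 0.0
    for i in range(n_points):
        level = i / (n_points - 1)  # recall levels 0, 1/(N-1), ..., 1
        p_interp = max(
            (p for rec, p in zip(recalls, precisions) if rec >= level),
            default=0.0,  # no point reaches this recall level
        )
        total += p_interp
    return total / n_points

# Toy monotone PR curve (illustrative values only):
recalls = [0.1, 0.4, 0.7, 1.0]
precisions = [1.0, 0.8, 0.6, 0.5]
ap = interpolated_ap(recalls, precisions)  # about 0.70 for this toy curve
```

Taking the maximum precision at or beyond each recall level makes the interpolated curve monotonically non-increasing, which is what lets the average of the N samples approximate the area under the curve.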