MetaDetect: Uncertainty Quantification and Prediction Quality Estimates for Object Detection

Preprint, 2020
DOI: 10.48550/arxiv.2010.01695

Abstract: In object detection with deep neural networks, the box-wise objectness score tends to be overconfident, sometimes even indicating high confidence in the presence of inaccurate predictions. Hence, the reliability of the prediction, and therefore reliable uncertainty estimates, is of the highest interest. In this work, we present a post-processing method that, for any given neural network, provides predictive uncertainty estimates and quality estimates. These estimates are learned by a post-processing model that receives as input a…
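A minimal sketch of the kind of post-processing the abstract describes, under stated assumptions: hand-crafted per-box features are mapped by a learned model to an IoU estimate (meta-regression) and a true/false-positive decision (meta-classification). The feature set, the synthetic data, and the choice of gradient-boosting models are assumptions made for illustration, since the abstract is truncated; this is not the paper's exact implementation.

# Toy sketch of box-wise meta-regression / meta-classification post-processing.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

rng = np.random.default_rng(0)

# Assumed per-box features: e.g. objectness score, box width, height, area, aspect ratio.
n_boxes = 1000
features = rng.random((n_boxes, 5))

# Synthetic stand-in for the true IoU of each predicted box with its
# best-matching ground-truth box (toy data for illustration only).
iou_with_gt = np.clip(features[:, 0] * 0.8 + 0.2 * rng.random(n_boxes), 0.0, 1.0)
is_true_positive = (iou_with_gt >= 0.5).astype(int)

# Meta-regression: predict the IoU of a detection without labels at test time.
meta_regressor = GradientBoostingRegressor().fit(features, iou_with_gt)

# Meta-classification: predict whether a detection is a true positive.
meta_classifier = GradientBoostingClassifier().fit(features, is_true_positive)

new_box_features = rng.random((1, 5))
print("predicted IoU:", meta_regressor.predict(new_box_features)[0])
print("P(true positive):", meta_classifier.predict_proba(new_box_features)[0, 1])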

Cited by 3 publications (10 citation statements)
References 25 publications (65 reference statements)
“…Other performance metrics: Prediction of other performance metrics (e.g., segmentation quality, intersection over union) has also been studied in the literature. [14] proposes meta-regression to predict the intersection over union (IoU) and also classifies true and false positives. [15] predicts when the per-frame mAP drops below a critical threshold.…”
Section: Related Work (mentioning, confidence: 99%)
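For reference, since the quoted works predict intersection over union, here is a plain IoU computation for two axis-aligned boxes in (x1, y1, x2, y2) format. This is the standard formula, not code from the cited papers.

def iou(box_a, box_b):
    # Intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    # Union = area A + area B - intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.143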
“…Additionally, to assess estimation quality, prior works are mostly restricted to variations of the Expected Calibration Error (ECE), which, as we discuss in Section 4.1, may not be suitable for certain applications. Works specific to object detection [12], [13], [14], [15] focus on estimating the uncertainty of the location or scale of a given detected object, but cannot evaluate the image as a whole, e.g., how many objects were missed (false negatives), which is needed to predict per-image metrics such as F1 score or recall. Perhaps more importantly, these works provide point solutions and do not address the growing need for general-purpose methodologies that can be effortlessly adapted to new problems of interest.…”
Section: Introduction (mentioning, confidence: 99%)
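The quote contrasts box-level uncertainty with image-level metrics and questions the suitability of the Expected Calibration Error. As a quick reminder of what ECE measures, a minimal sketch with equal-width confidence bins; the toy data below is invented purely for illustration.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE = sum over bins of (n_b / N) * |accuracy(b) - mean confidence(b)|.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: an overconfident detector (high scores, lower accuracy).
print(expected_calibration_error([0.9, 0.95, 0.85, 0.6], [1, 0, 1, 1]))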
“…Similarly, uncertainty measures of epistemic and aleatoric kind were obtained in [25]. A novel approach to predictive uncertainty estimation in object detection was presented in [19]. The approach is specific to anchor-based object detection architectures, where NMS is used to filter different output boxes indicating the same object instance.…”
Section: Related Work (mentioning, confidence: 99%)
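Since the quoted approach is tied to anchor-based detectors where non-maximum suppression (NMS) filters duplicate boxes for the same object instance, here is a minimal greedy, single-class NMS sketch. It illustrates the standard algorithm only, not the cited architectures' implementation.

def box_iou(a, b):
    # IoU of two (x1, y1, x2, y2) boxes.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    # Visit boxes from highest to lowest score; drop any box overlapping an
    # already-kept box by more than the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(box_iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]: the near-duplicate box is removed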
“…The new method we propose here implements gradient-based uncertainty metrics for three modern object detection architectures [2,3,4]. We use the obtained metrics for predicting detection quality and compare them on this task with the score as well as with MC dropout and the output-based framework proposed in [19]. We investigate how these methods can be combined by aggregation for a more comprehensive estimation of prediction uncertainty.…”
Section: Introduction (mentioning, confidence: 99%)
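The quote compares gradient-based uncertainty metrics with MC dropout and the output-based framework of [19]. As a reminder of the MC dropout baseline only: dropout is kept active at inference time and the spread of repeated forward passes serves as an uncertainty measure. The toy classifier below is an assumption for illustration, not one of the cited detection architectures [2,3,4].

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 4))

def mc_dropout_predict(model, x, n_samples=30):
    # Keep dropout layers stochastic at inference time.
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    # Predictive mean and per-class spread across the stochastic passes.
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 16)
mean, std = mc_dropout_predict(model, x)
print("mean class probabilities:", mean)
print("per-class std (uncertainty):", std)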
“…In computer vision, uncertainty is taken into account in a variety of applications such as image classification [151,152], segmentation [83,153], camera relocalization [154], object detection [155,156,157], and image/video retrieval (restoration) [158,159], in the setting of Bayesian and ensemble learning. Image classification and segmentation are among the most popular applications of DL models.…”
Section: Computer Vision (mentioning, confidence: 99%)