2021 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48506.2021.9561483
Fast Uncertainty Quantification for Deep Object Pose Estimation

Abstract: Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image lies outside the training domain, for instance under sim2real transfer. Efficient and robust uncertainty quantification (UQ) for pose estimators is critically needed in many robotic tasks. In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation. We ensemble 2-3 pre-trained models with different neural network architectures and/or training data sources…
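The ensembling idea in the abstract can be sketched concretely: run each pre-trained estimator on the same image and use the disagreement among the predicted poses as an uncertainty score. The metric below (Euclidean distance for translations, geodesic angle for rotations) is one illustrative choice, not necessarily the paper's exact disagreement measure; `ensemble_disagreement` and its input format are hypothetical.

```python
import numpy as np

def quat_angle(q1, q2):
    # Geodesic angle (radians) between two unit quaternions.
    # abs() handles the double cover (q and -q encode the same rotation).
    d = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(np.clip(d, -1.0, 1.0))

def ensemble_disagreement(poses):
    """poses: list of (t, q) pairs, t a 3-vector, q a unit quaternion.
    Returns mean pairwise (translation, rotation) disagreement,
    usable as a plug-and-play uncertainty score."""
    t_dists, r_dists = [], []
    for i in range(len(poses)):
        for j in range(i + 1, len(poses)):
            t_dists.append(np.linalg.norm(poses[i][0] - poses[j][0]))
            r_dists.append(quat_angle(poses[i][1], poses[j][1]))
    return float(np.mean(t_dists)), float(np.mean(r_dists))
```

With 2-3 ensemble members, the pairwise loop is cheap; a high disagreement flags inputs (e.g. out-of-domain images) where the estimate should not be trusted.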

Cited by 17 publications (16 citation statements)
References 34 publications
“…The estimated distribution was propagated with a particle filter. Shi et al. [21] proposed an ensemble-based uncertainty quantification method. In [6], the authors regressed the object orientation and its uncertainty based on the Bingham distribution.…”
Section: Object Pose Uncertainty Estimation
Mentioning confidence: 99%
“…To facilitate transfer, related work also leverages important state information such as object pose, which is readily available in simulation but difficult to obtain in the real world [2], [13]–[17]. While this approach is valid, it is limited by the accuracy of pose estimation algorithms, which typically require a well-calibrated system and/or reference models [3], [17], [25], [26]. In comparison, our approach learns directly from raw images and transfers to a real robot with uncalibrated cameras.…”
Section: Related Work
Mentioning confidence: 99%
“…However, solving complex precision-based manipulation tasks in an end-to-end fashion remains very challenging, and current methods often rely on important state information that is difficult to obtain in the real world [2], [13]–[16], and/or rely on known camera intrinsics and meticulous calibration [3], [17] in order to transfer from simulation to the real world. We argue that for a system to be truly robust, it should be able to operate from visual feedback, succeed without the need for camera calibration, and be flexible enough to tolerate task variations, much like humans.…”
Section: Introduction
Mentioning confidence: 99%
“…In (Lakshminarayanan et al., 2017), the best results were obtained with a negative log-likelihood loss function and virtual adversarial training. With regard to uncertainty estimation in deep-learning-based 6D object pose estimation, (Shi et al., 2021) used a small ensemble of pose estimation models to obtain the uncertainty of the predicted object poses.…”
Section: Related Work
Mentioning confidence: 99%
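The negative log-likelihood loss mentioned in the snippet above can be illustrated with a minimal sketch: when a network predicts both a mean and a log-variance per output, the heteroscedastic Gaussian NLL penalizes errors in proportion to the predicted confidence. This is the generic deep-ensembles formulation, not code from the cited work; `gaussian_nll` is a hypothetical name.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Heteroscedastic Gaussian negative log-likelihood for one prediction:
    # large errors under low predicted variance are penalized heavily,
    # while the log_var term discourages inflating the variance.
    return 0.5 * (np.exp(-log_var) * (y - mu) ** 2 + log_var + np.log(2 * np.pi))
```

Training each ensemble member with this loss lets every model report its own per-output variance, which is then combined with the between-member disagreement to form the overall predictive uncertainty.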
“…Recent computer-vision-based 6D object pose estimation approaches, which achieve state-of-the-art results on benchmark datasets, use deep learning models such as convolutional neural networks (CNNs) to obtain the object pose (Hodan et al., 2020). However, it has been observed that these models do not perform well under changes in the input data and are therefore difficult to use in mission-critical applications (Shi et al., 2021; Amodei et al., 2016; Loquercio et al., 2020). This motivates the use of uncertainty quantification (UQ) methods for image-based regression tasks with convolutional neural networks.…”
Section: Introduction
Mentioning confidence: 99%