2019
DOI: 10.1007/s11263-019-01250-9

DeepIM: Deep Iterative Matching for 6D Pose Estimation

Abstract: Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against …
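The abstract describes an iterative render-and-compare loop: render the object at the current pose estimate, predict a relative SE(3) transform from the rendered and observed images, and compose that update with the estimate. A minimal sketch of such a loop is given below; the render_fn and match_net callables are hypothetical placeholders, not the paper's actual renderer or network.

```python
import numpy as np

def refine_pose(observed_img, initial_pose, render_fn, match_net, num_iters=4):
    """Iterative render-and-compare refinement in the spirit of DeepIM.

    observed_img: input RGB image.
    initial_pose: 4x4 object-to-camera transform from an initial estimator.
    render_fn(pose):               hypothetical renderer returning an image of the object at `pose`.
    match_net(rendered, observed): hypothetical network returning a 4x4 relative SE(3) transform.
    """
    pose = initial_pose.copy()
    for _ in range(num_iters):
        rendered = render_fn(pose)                 # image at the current estimate
        delta = match_net(rendered, observed_img)  # predicted relative SE(3) update
        pose = delta @ pose                        # compose update with the estimate
    return pose
```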

Cited by 268 publications (550 citation statements)
References 62 publications
“…Our proposed approach also performs better than all existing comparable methods when evaluated on the AD{D|I} metric. However, the DeepIM [11] pose refinement method outperforms our approach on this metric, whereas ours performs better on the 2D-Reprojection metric. We investigated this issue and found that the LINEMOD dataset has many instances of noisy pose annotations due to registration errors between the RGB and the depth image, because the pose annotation process was done using ICP on the depth images.…”
Section: LINEMOD Dataset
confidence: 74%
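For reference, the metrics this statement contrasts are the standard ones: ADD averages the 3D distance between model points under the ground-truth and estimated poses (ADI/ADD-S uses closest-point distances for symmetric objects), while the 2D-Reprojection metric averages the pixel distance between their projections, commonly thresholded at 5 px. A minimal NumPy sketch under those usual definitions:

```python
import numpy as np

def add_metric(pts, R_gt, t_gt, R_est, t_est):
    """ADD: mean 3D distance between model points transformed by the
    ground-truth and the estimated pose."""
    gt = pts @ R_gt.T + t_gt
    est = pts @ R_est.T + t_est
    return np.linalg.norm(gt - est, axis=1).mean()

def reproj_2d_metric(pts, K, R_gt, t_gt, R_est, t_est):
    """2D reprojection error: mean pixel distance between projections of
    the model points under the two poses (K is the camera intrinsics)."""
    def project(R, t):
        cam = pts @ R.T + t
        uv = cam @ K.T
        return uv[:, :2] / uv[:, 2:3]
    return np.linalg.norm(project(R_gt, t_gt) - project(R_est, t_est), axis=1).mean()
```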
“…Pose refinement methods. Recent deep learning solutions have also considered techniques for pose refinement from RGB images [11,15] as a way to bridge the gap between RGB and RGB-D pose accuracies. DeepIM [11] uses a FlowNet backbone architecture to predict a relative SE(3) transformation that matches the colored image of the object, rendered at the initial pose, to the observed image.…”
Section: Related Work
confidence: 99%
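The relative SE(3) update mentioned here is typically applied in a decoupled way, rotating about the object's current center so that rotation and translation errors do not mix. The sketch below shows one such decoupled update; the exact parameterization DeepIM uses differs in detail, so treat this as an illustrative assumption rather than the paper's formulation.

```python
import numpy as np

def apply_relative_pose(pose, delta_R, delta_t):
    """Apply a predicted relative update to a 4x4 object-to-camera pose.

    The rotation is applied about the object's current center (so it does not
    perturb the translation) and the translation is added in the camera frame.
    This decoupling is an assumption for illustration only.
    """
    new_pose = pose.copy()
    new_pose[:3, :3] = delta_R @ pose[:3, :3]   # rotate about the object center
    new_pose[:3, 3] = pose[:3, 3] + delta_t     # shift the center in the camera frame
    return new_pose
```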
See 1 more Smart Citation
“…Similarly, we uniformly sample an axis of rotation and a rotation angle in the range ±45 degrees. [Table residue: per-object ADD and ADD-S AUC for PoseCNN [5], PoseCNN refined (ours), HeatMaps [26], HeatMaps refined (ours), and DeepIM [16].] We report the area under the accuracy curve (AUC) for varying error thresholds on the ADD and ADD-S metrics. Fig.…”
Section: Methods
confidence: 99%
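The perturbation this statement describes, a uniformly sampled rotation axis with an angle drawn from ±45 degrees, can be sketched as follows; the function name and the use of SciPy are illustrative choices, not taken from the cited paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_rotation_perturbation(max_angle_deg=45.0, rng=None):
    """Sample a random rotation: uniform axis direction on the unit sphere and
    a uniform angle in [-max_angle_deg, +max_angle_deg]."""
    rng = np.random.default_rng() if rng is None else rng
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                           # uniform direction
    angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    return Rotation.from_rotvec(angle * axis).as_matrix()  # 3x3 rotation matrix
```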
“…In addition to predicting the pose from RGB or RGB-D data, there are several refinement techniques for pose improvement after the initial estimation. Li et al [17] introduce a render-and-compare technique that improves the estimation only using the original RGB input. If depth is available, ICP registration can be used to refine poses.…”
Section: Introduction
confidence: 99%
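Where depth is available, the ICP refinement this statement mentions is commonly done by registering the object model against the back-projected depth points, starting from the RGB-based estimate. A sketch using Open3D's point-to-point ICP; the correspondence distance and other parameter values are assumptions:

```python
import numpy as np
import open3d as o3d

def icp_refine(model_points, scene_points, init_pose, max_corr_dist=0.01):
    """Refine a 6D pose estimate with point-to-point ICP against the depth point cloud.

    model_points: (N, 3) object model points in the object frame.
    scene_points: (M, 3) points back-projected from the depth image, camera frame.
    init_pose:    4x4 initial object-to-camera transform from the RGB estimator.
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(model_points)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(scene_points)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return np.asarray(result.transformation)  # refined 4x4 pose
```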