2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487781

Fast 6D pose estimation for texture-less objects from a single RGB image

Abstract: A fundamental step in solving bin-picking and grasping problems is the accurate estimation of an object's 3D pose. Such visual tasks usually rely on profusely textured objects: standard procedures such as detection of interest points or computation of appearance-based descriptors are favoured by a highly informative surface. However, texture-less objects or their parts (i.e., those whose surface texture is poorly conditioned) are common in any environment but still challenging to deal with. This is due the fa…


Cited by 37 publications (17 citation statements)
References 23 publications
“…Crivellaro et al [12] supply 3D CAD models and annotated RGB sequences with 3 highly occluded and texture-less objects. Muñoz et al [36] provide RGB sequences of 6 texture-less objects that are each imaged in isolation against a clean background and without occlusion. Further to the above, there exist RGB datasets such as [13,50,38,25], for which the ground truth is provided only in the form of 2D bounding boxes.…”
Section: Depth-Only and RGB-Only Datasets
confidence: 99%
“…if they point roughly in the same direction. This is similar to the orthogonal line search proposed in [12]. The edge-to-edge association provides multiple P2L tasks per link, which are updated at each iteration by rendering the new estimated state.…”
Section: E. Tracking Objective
confidence: 96%
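The point-to-line (P2L) association mentioned in the excerpt above can be illustrated with a minimal sketch: each observed edge point is paired with a rendered model edge line, and the residual is the orthogonal distance from the point to that line. The function name and the 2D setting are illustrative assumptions, not the cited paper's implementation:

```python
import numpy as np

def p2l_residual(p, q, d):
    """Orthogonal distance from point p to the line through q with direction d.

    A hypothetical sketch of a point-to-line (P2L) residual as used in
    edge-based tracking objectives; minimised over all associations per link.
    """
    d = d / np.linalg.norm(d)        # normalise the line direction
    v = p - q                        # vector from line anchor to the point
    return np.linalg.norm(v - np.dot(v, d) * d)  # remove the along-line component

# Example: point (1, 1) against the x-axis -> orthogonal distance 1.0
print(p2l_residual(np.array([1.0, 1.0]),
                   np.array([0.0, 0.0]),
                   np.array([1.0, 0.0])))
```

In an iterative tracker, these residuals would be recomputed after each pose update, since re-rendering the estimated state changes which model edge each observed point is associated with.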
“…Visual features: Different sparse and dense visual features have been used in tracking literature to establish correspondences between the observed and estimated state of a 3D model. Early work in this area used dense features like colour image edges [11], [1], [12] and depth images [9]. These correspondences are based on the local appearance of the estimated state and change with each iteration of the optimisation.…”
Section: Related Work
confidence: 99%
“…Feature Extraction Phase. During an off-line feature extraction phase, 3D pose [180], [34], [41], [181] or 6D pose [176], [179], [178], [182], [177], [23], [2], [24], [25] annotated templates involved in the training data are represented with robust feature descriptors. Features are manually-crafted utilizing the available shape, geometry, and appearance information [176], [179], [178], [182], [177], [23], [2], [25], and the recent paradigm in the field is to deep learn those using neural net architectures [180], [34], [41], [181].…”
Section: Template Matching
confidence: 99%
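The template-matching scheme described above — pose-annotated templates represented by feature descriptors off-line, then matched at test time — can be sketched as a nearest-neighbour lookup. All names and the toy 2D descriptors are hypothetical; real systems use high-dimensional shape/appearance or learned descriptors:

```python
import numpy as np

def match_template(query_desc, template_descs, template_poses):
    """Return the pose annotation of the template nearest to the query descriptor.

    A minimal sketch of descriptor-based template matching: off-line, each
    pose-annotated template is reduced to a feature vector; on-line, the
    query descriptor is matched by L2 distance and the winner's pose is
    returned as the estimate.
    """
    dists = np.linalg.norm(template_descs - query_desc, axis=1)  # L2 to every template
    best = int(np.argmin(dists))                                 # nearest template index
    return template_poses[best], float(dists[best])

# Toy database: three templates, each with a 2D descriptor and a pose label
templates = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
poses = ["pose_A", "pose_B", "pose_C"]

pose, dist = match_template(np.array([0.9, 0.1]), templates, poses)
print(pose)  # nearest descriptor is [1.0, 0.0], so "pose_B"
```

The off-line/on-line split is the key design point: descriptor extraction for the template set is done once, so the on-line cost is only one descriptor computation plus a nearest-neighbour search.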