2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012)
DOI: 10.1109/humanoids.2012.6651520
Real-time 3D segmentation of cluttered scenes for robot grasping

Abstract: We present a real-time algorithm that segments unstructured and highly cluttered scenes. The algorithm robustly separates objects of unknown shape in congested scenes of stacked and partially occluded objects. The model-free approach finds smooth surface patches, using a depth image from a Kinect camera, which are subsequently combined to form highly probable object hypotheses. The real-time capabilities and the quality of the algorithm are evaluated on a benchmark database. Advantages compared to exi…
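The patch-finding step described in the abstract (grouping depth pixels into smooth surface patches) can be sketched as a generic normal-based region growing; this is an illustrative reimplementation of the general technique, not the paper's actual algorithm, and the function and parameter names are my own:

```python
import numpy as np
from collections import deque

def grow_surface_patches(normals, valid, angle_thresh_deg=10.0):
    """Greedy region growing over a per-pixel normal map (H, W, 3):
    4-connected neighbours whose normals differ by less than the
    angular threshold are merged into one smooth surface patch.
    Returns an (H, W) integer label image, 0 = unlabelled."""
    h, w, _ = normals.shape
    labels = np.zeros((h, w), dtype=int)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] or not valid[sy, sx]:
                continue
            next_label += 1
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:  # breadth-first flood fill of the patch
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and valid[ny, nx]
                            and not labels[ny, nx]
                            and normals[y, x] @ normals[ny, nx] > cos_thresh):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels
```

The paper's pipeline would then combine such patches into object hypotheses; that merging step is omitted here.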

Cited by 38 publications (32 citation statements) | References 17 publications
“…In addition to these qualitative results, we also provide quantitative results which compare to state-of-the-art methods on the NYU Indoor Dataset [22] and Object Segmentation Database [20]. We compare segments against ground truth using three standard measures: Weighted Overlap (WOv), a summary measure proposed by Silberman et al. [22]; false negative (fn) and false positive (fp) scores from [24]; and over- (Fos) and under-segmentation (Fus) from [20].…”
Section: Discussion
confidence: 99%
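The Weighted Overlap measure quoted above can be sketched directly from its definition (for each ground-truth segment, take the best intersection-over-union against any predicted segment, weighted by segment size); this is a generic reimplementation from that definition, not code from any of the cited papers, and the label-image conventions (integer arrays, 0 = background) are my own assumption:

```python
import numpy as np

def weighted_overlap(gt, pred):
    """Weighted Overlap (WOv) in the style of Silberman et al.:
    sum over ground-truth segments of (segment size) * (best IoU
    against any predicted segment), normalised by total labelled
    pixel count.  gt and pred are integer label images, 0 = background."""
    gt_ids = [g for g in np.unique(gt) if g != 0]
    pred_ids = [p for p in np.unique(pred) if p != 0]
    n_pixels = sum(int((gt == g).sum()) for g in gt_ids)
    total = 0.0
    for g in gt_ids:
        g_mask = gt == g
        best_iou = 0.0
        for p in pred_ids:
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            best_iou = max(best_iou, inter / union)
        total += g_mask.sum() * best_iou
    return total / n_pixels
```

A perfect segmentation scores 1.0; merging two equal-sized objects into one predicted segment drops the score to 0.5, since each ground-truth segment then overlaps only half of its best match.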
“…This is also true for the boundary between an object and the supporting surface. As a consequence, objects that have …

Table 1. Comparison of different segmentation methods on the OSD dataset using weighted overlap WOv (the higher, the better), false positives fp, false negatives fn, as well as over- and under-segmentation Fos and Fus (the lower, the better).

Method                | Input                      | WOv | fp   | fn   | Fos  | Fus
[20]                  | RGB-D + texture + geometry | –   | –    | –    | 4.5% | 7.9%
Uckermann et al. [24] | no learning                | –   | 1.9% | 3.3% | 7.8% | 7.3%
…”
Section: Object Segmentation Database (OSD)
confidence: 99%
“…This information is utilized on the one hand to choose the grasp prototype and on the other hand to set up an appropriate approaching controller, exploiting the symmetries inherent to all recognized object shapes. A video illustrating the segmentation capabilities and the achieved grasping skills is available on YouTube [22].…”
Section: Vision-based Grasp Selection
confidence: 99%
“…To choose an appropriate grasp for a given object, we employ a real-time, model-free scene segmentation method [21], which yields individual point clouds for all objects within the scene. A superquadric model is fitted to each point cloud, capturing the coarse shape of the object and varying smoothly between sphere, ellipsoid, cylinder, and box [22]. This model provides an estimate of the position and orientation as well as the coarse size and shape of the object.…”
Section: Vision-based Grasp Selection
confidence: 99%
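The superquadric fit mentioned in this citation is built on the standard superquadric inside-outside function, whose two shape exponents produce exactly the sphere/ellipsoid/cylinder/box continuum described. A minimal sketch of that function (the textbook formulation, not the cited implementation; the parameter names `a` and `eps` are my own):

```python
import numpy as np

def superquadric_inside_outside(pts, a, eps):
    """Superquadric inside-outside function F for points pts (N, 3).
    a = (a1, a2, a3) are the half-extents along x, y, z;
    eps = (eps1, eps2) are the shape exponents: eps1 = eps2 = 1 gives
    an ellipsoid, values near 0 approach a box, and eps1 near 0 with
    eps2 = 1 approaches a cylinder.  F < 1 inside the surface,
    F = 1 on it, F > 1 outside."""
    # Normalise by the half-extents; abs() keeps fractional
    # exponents real for points in all octants.
    x, y, z = (np.abs(pts) / np.asarray(a, dtype=float)).T
    e1, e2 = eps
    return ((x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1)
            + z ** (2.0 / e1))
```

Fitting then typically minimises a residual of F over the object's point cloud with respect to pose, half-extents, and exponents; that optimisation loop is outside the scope of this sketch.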