2017
DOI: 10.1177/0278364917713117
RGB-D object detection and semantic segmentation for autonomous manipulation in clutter

Abstract: Autonomous robotic manipulation in clutter is challenging. A large variety of objects must be perceived in complex scenes, where they are partially occluded and embedded among many distractors, often in restricted spaces. To tackle these challenges, we developed a deep-learning approach that combines object detection and semantic segmentation. The manipulation scenes are captured with RGB-D cameras, for which we developed a depth fusion method. Employing pretrained features makes learning from small annotated …


Cited by 144 publications (68 citation statements)
References 57 publications
“…Especially the usage of fully convolutional networks seems promising [13], [14] which is why our framework adapts the same general structure. Some segmentation methods are also specifically configured for bin picking tasks like [15], [16], [17]. Other than these methods, our framework considers only object-background segmentation as part of semantic segmentation, i.e., only two classes should be distinguished, namely important objects to interact with in the scene and background.…”
Section: Related Work
confidence: 99%
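The excerpt above reduces semantic segmentation to two classes: objects to interact with versus background. A minimal sketch of that final step, assuming a fully convolutional network has already produced a per-pixel score map (the function name and threshold are illustrative, not from the cited papers):

```python
import numpy as np

def object_background_mask(logits, threshold=0.5):
    """Collapse per-pixel FCN scores into a binary object-vs-background mask.

    logits: H x W array of real-valued scores for the "object" class.
    Returns a boolean H x W mask: True = object of interest, False = background.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # per-pixel sigmoid
    return probs >= threshold

# Example on a 2x3 score map
scores = np.array([[2.0, -1.0, 0.0],
                   [-3.0, 4.0, 0.1]])
mask = object_background_mask(scores)
# mask -> [[True, False, True], [False, True, True]]
```

Framing the problem as binary segmentation sidesteps per-object class labels entirely, which is why it suits bin-picking setups where only "graspable object" versus "tote/shelf" matters.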
“…Nowadays, LiDAR-based system are becoming more prevalent in the autonomous driving industry. RGB-D-based object detection has been explored in [9,10]. However, to the best of our knowledge, this is the first work that utilizes depth obtained from LiDAR to reduce the search space of a detection algorithm.…”
Section: Related Work
confidence: 99%
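The quoted idea of using depth to shrink a detector's search space can be illustrated with a toy region-of-interest filter: keep only pixels whose (LiDAR-derived) depth falls in a plausible range, then run detection inside their bounding box. Everything here (function name, depth window) is an illustrative assumption, not the cited method:

```python
import numpy as np

def depth_roi(depth, near, far):
    """Bounding box (x0, y0, x1, y1) of pixels with depth in [near, far] metres.

    A hypothetical way to prune the 2D search space using depth; returns None
    when no pixel lies in the range.
    """
    ys, xs = np.nonzero((depth >= near) & (depth <= far))
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

# Example: only the two centre pixels are within 1-5 m
depth = np.array([[10.0, 10.0, 10.0],
                  [10.0,  2.0,  3.0],
                  [10.0, 10.0, 10.0]])
roi = depth_roi(depth, 1.0, 5.0)  # -> (1, 1, 3, 2)
```

A detector then only needs to process the cropped region instead of the full frame, which is the essence of the search-space reduction the excerpt describes.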
“…Our own entry for the Amazon Picking Challenge 2016 [3], [4] placed second in the stow competition and third in the pick competition. It used a single UR10 arm, could only use suction for manipulation, and required manual annotation of entire tote or shelf scenes for training the object perception pipeline.…”
Section: Related Work
confidence: 99%
“…We add Gaussian noise on translation (σ = 1.5 cm) and rotation (σ = 60°), in order to obtain different grasp poses for each manipulation attempt. [4] https://github.com/mapbox/polylabel…”
Section: Heuristic Grasp Selection
confidence: 99%
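The perturbation the excerpt describes, Gaussian noise with σ = 1.5 cm on translation and σ = 60° on rotation, can be sketched as follows. The 4-DoF pose representation (x, y, z, yaw) and the function name are simplifying assumptions for illustration, not the cited paper's actual parameterization:

```python
import random

def perturb_grasp(x, y, z, yaw_deg, sigma_t=0.015, sigma_r=60.0, rng=None):
    """Return a grasp pose jittered with Gaussian noise.

    Translation components are in metres (sigma_t = 0.015 m = 1.5 cm),
    yaw is in degrees (sigma_r = 60 deg), matching the quoted settings.
    """
    rng = rng or random.Random()
    return (x + rng.gauss(0.0, sigma_t),
            y + rng.gauss(0.0, sigma_t),
            z + rng.gauss(0.0, sigma_t),
            (yaw_deg + rng.gauss(0.0, sigma_r)) % 360.0)

# Each call yields a slightly different grasp for the next attempt
pose = perturb_grasp(0.40, 0.10, 0.25, 90.0, rng=random.Random(0))
```

Sampling a fresh perturbation per attempt is a cheap way to escape repeated failures caused by one unlucky grasp pose.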