2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI: 10.1109/robio.2018.8664766
Bin Picking of Reflective Steel Parts Using a Dual-Resolution Convolutional Neural Network Trained in a Simulated Environment

Abstract: We consider the case of robotic bin picking of reflective steel parts, using a structured light 3D camera as a depth imaging device. In this paper, we present a new method for bin picking, based on a dual-resolution convolutional neural network trained entirely in a simulated environment. The dual-resolution network consists of a high-resolution focus network to compute the grasp and a low-resolution context network to avoid local collisions. The reflectivity of the steel parts results in depth images that have a…
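The abstract only names the two branches, so the following is a minimal sketch of how such a dual-resolution network could be wired up, not the authors' implementation: the layer sizes, the single-channel depth input, the 64x64 focus crop, the 128x128 context view, and the single-score output head are all assumptions made for illustration.

```python
# Minimal sketch of a dual-resolution grasp network (PyTorch).
# All architectural details below are assumptions, not the cited paper's design.
import torch
import torch.nn as nn

class DualResolutionGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        # High-resolution "focus" branch: a small crop around the candidate
        # grasp, used to compute the grasp itself.
        self.focus = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Low-resolution "context" branch: a downsampled view of the bin,
        # used to penalize grasps that would cause local collisions.
        self.context = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Fuse both branches and score the grasp candidate.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear((32 + 16) * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 1),  # collision-aware grasp score
        )

    def forward(self, focus_patch, context_patch):
        f = self.focus(focus_patch)      # (B, 32, 4, 4)
        c = self.context(context_patch)  # (B, 16, 4, 4)
        return self.head(torch.cat([f, c], dim=1))

# Example call with hypothetical input sizes.
net = DualResolutionGraspNet()
score = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 128, 128))
```

The key design point carried over from the abstract is the split of responsibility: the focus branch sees fine geometry near the grasp, while the cheaper context branch sees enough of the surroundings to flag local collisions.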

Cited by 5 publications (3 citation statements)
References 25 publications
“…Free from the limitation of manually extracted features, grasping algorithms based on deep learning have achieved results in all aspects that traditional approaches cannot match, taking the robot's intelligence to a higher level. Specifically, with RGB or depth images as input, robotic grasping based on convolutional neural networks (CNNs), the dominant deep learning framework in computer vision, has obtained high grasp success rates in many tasks (Lenz et al., 2015; Varley et al., 2015; Johns et al., 2016; Finn and Levine, 2017; James et al., 2017; Kumra and Kanan, 2017; Zhang et al., 2017; Dyrstad et al., 2018; Levine et al., 2018; Schmidt et al., 2018; Schwarz et al., 2018). As shown in Figure 1, visually driven robot dexterous grasp learning can today be roughly divided into two categories, according to whether the learning process relies on trial and error.…”
Section: Introduction
confidence: 99%
“…When the sensor is attached to the robotic arm performing the grasping, additional constraints are imposed on how the manipulator can move whilst avoiding self-collisions and collisions with the bin or other parts of the environment. A characteristic of the grasps supplied by the network [8] is that they are decoupled from the robot tasked with reaching them. The network has no knowledge of the existence, or kinematics, of the robot.…”
Section: Problem Formulation
confidence: 99%
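Since the cited network proposes grasps without any knowledge of the robot, the citing work must check every candidate against the arm's kinematics and the environment afterwards. The sketch below illustrates that filtering step in the abstract; the check functions (has_ik_solution, collides_with_environment) are hypothetical placeholders, not part of any API from the cited works.

```python
# Hedged sketch: post-filtering robot-agnostic grasp proposals.
# The two check callables are hypothetical stand-ins for an IK solver and a
# collision checker; they are not defined in the quoted text.
from typing import Callable, List, Tuple

Grasp = Tuple[float, float, float, float, float, float]  # e.g. pose (x, y, z, roll, pitch, yaw)

def select_feasible_grasps(
    grasps: List[Grasp],
    has_ik_solution: Callable[[Grasp], bool],
    collides_with_environment: Callable[[Grasp], bool],
) -> List[Grasp]:
    """Keep only grasps the robot can reach without self-collisions or
    collisions with the bin, since the proposing network knows nothing
    about the robot's existence or kinematics."""
    return [
        g for g in grasps
        if has_ik_solution(g) and not collides_with_environment(g)
    ]
```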
“…: [5], [6] and [7]) is a popular choice to generate grasps for picking. The grasps chosen for picking in this set-up are supplied by a dual-resolution convolutional neural network trained on simulated data [8]. The input to the network is a point cloud of the current distribution of parts in the bin, and the output is multiple grasp pairs, {d_i, v_i}, where i ∈ {1, …”
Section: Introduction
confidence: 99%
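The quote pins down the interface used by the citing work: a point cloud of the bin goes in, and a list of grasp pairs {d_i, v_i} comes out. The quote is truncated before d and v are defined, so reading them as a grasp point and an associated direction is an assumption, as are the container types and the model wrapper below.

```python
# Sketch of the implied network interface; the GraspPair fields and the
# model's return format are assumptions, not the cited work's definitions.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class GraspPair:
    d: np.ndarray  # assumed: 3D grasp point in the bin/camera frame
    v: np.ndarray  # assumed: 3D direction associated with the grasp

def propose_grasps(point_cloud: np.ndarray, model) -> List[GraspPair]:
    """Wrap a trained grasp network behind the interface described in the quote.

    `point_cloud` is an (N, 3) array of points observed in the bin; `model`
    stands in for the trained dual-resolution network, whose exact call
    signature is not given in the quoted text.
    """
    raw = model(point_cloud)  # hypothetical: an (M, 6) array of grasp pairs
    return [GraspPair(d=row[:3], v=row[3:]) for row in raw]
```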