2022
DOI: 10.1109/lra.2022.3188437
Learning Push-Grasping in Dense Clutter

Abstract: Robotic grasping in highly cluttered environments remains a challenging task due to the lack of collision-free grasp affordances. In such conditions, non-prehensile actions could help to increase such affordances. We propose a multi-fingered push-grasping policy that creates enough space for the fingers to wrap around an object to perform a stable power grasp, using a single primitive action. Our approach learns a direct mapping from visual observations to actions and is trained in a fully end-to-end manner. T…
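The abstract describes a policy that maps visual observations directly to a single push-grasp primitive. A common way to realize such a mapping in pixel-wise Q-learning work is to predict one Q-value per (gripper rotation, image pixel) and act greedily over that volume. The sketch below illustrates only this action-selection step, with the Q-network stubbed out by a toy function; all names, sizes, and the action parameterization are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

N_ROTATIONS = 16      # assumed discretization of gripper orientations
HEIGHTMAP_SIZE = 224  # assumed side length of the input depth heightmap


def predict_q_maps(heightmap, n_rotations=N_ROTATIONS):
    """Stand-in for a fully-convolutional Q-network.

    A real network would output one Q-map per rotation; here we fake
    that with a deterministic toy function (rolled copies of the input).
    """
    return np.stack([np.roll(heightmap, r, axis=0) for r in range(n_rotations)])


def select_action(heightmap):
    """Greedy action: the (rotation, row, col) triple with the highest Q."""
    q_maps = predict_q_maps(heightmap)           # shape: (rotations, H, W)
    rot, row, col = np.unravel_index(np.argmax(q_maps), q_maps.shape)
    angle = rot * (2 * np.pi / N_ROTATIONS)      # rotation index -> radians
    return {"angle": angle, "pixel": (row, col)}


heightmap = np.zeros((HEIGHTMAP_SIZE, HEIGHTMAP_SIZE))
heightmap[100, 50] = 1.0  # a single high point should attract the argmax
action = select_action(heightmap)
```

The pixel then fixes where the push-grasp primitive is executed and the angle fixes the gripper orientation; in an end-to-end setup both come out of the same forward pass rather than from a separate grasp-pose detector.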

Cited by 10 publications (5 citation statements)
References 31 publications
“…In [14], the RL was based on a hybrid discrete-continuous adaptation of soft actor-critic (SAC) for learning continuous and high-dimensional PG policies in robotic bin-picking. In contrast to a single network for pushing and grasping, Kiatos et al. [15] adopted two different networks, one for the position and angle of the gripper, and the other for the opening size of the gripper. Also with two separate networks, Li et al. [16] proposed a position net for pushing and grasping positions, and an angle net for grasping angles.…”
Section: Deep-learning Models
confidence: 99%
“…However, only depth images were considered in their work, and so the test results for novel unknown objects were not perfect. More recent research makes it possible to train robots to learn synergies between pushing and grasping in dense clutter [16][17][18][19]. These methods utilize visual observations for end-to-end decision-making without using object-specific knowledge.…”
Section: Related Work
confidence: 99%
“…Kalashnikov et al. [20] introduce a scalable vision-based reinforcement learning framework named QT-Opt, which enables robots to learn how to pick up objects and execute non-prehensile pre-grasp actions. Kiatos et al. [18] learn a direct mapping from visual observations to actions, trained in a fully end-to-end manner. Without assuming a segmentation of the scene, the grasping policy accomplishes robust power grasps in cluttered environments.…”
Section: Related Work
confidence: 99%
“…Recently, a multi-fingered push-grasping policy was proposed using deep Q-learning that creates enough space for the fingers to wrap around an object to perform a stable power grasp using a single primitive action [74]. A target-oriented robotic push-grasping system was also proposed that is able to actively discover and pick up impurities in dense environments through synergies between pushing and grasping actions.…”
Section: Critical Review
confidence: 99%