2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8462863

Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning

Abstract: Rearranging objects on a tabletop surface by means of nonprehensile manipulation is a task which requires skillful interaction with the physical world. Usually, this is achieved by precisely modeling the physical properties of the objects, robot, and environment for explicit planning. In contrast, as explicitly modeling the physical environment is not always feasible and involves various uncertainties, we learn a nonprehensile rearrangement strategy with deep reinforcement learning based only on visual feedback…

Cited by 48 publications (24 citation statements). References 31 publications (51 reference statements).

Citation statements, ordered by relevance:
“…In terms of learning the nonprehensile rearrangement task in clutter, many feasible frameworks have been presented. For example, Yuan et al. [137] learned a nonprehensile rearrangement strategy based on the Deep Q-network algorithm, exploiting a heuristic exploration strategy to reduce the number of collisions. In contrast, a Monte-Carlo Tree Search exploration strategy that relies on visual input from an RGB camera has been presented to learn the nonprehensile rearrangement task [138], [118].…”
Section: Discussion
Mentioning, confidence: 99%
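The snippet above pairs a Deep Q-network with a collision-reducing exploration heuristic. The following is a rough, non-authoritative sketch of that idea, not the authors' code: epsilon-greedy exploration is restricted to actions that a hypothetical `collision_free` heuristic accepts. The network shape, the action count, and all helper names are illustrative assumptions.

```python
import random
import torch
import torch.nn as nn

N_ACTIONS = 8  # assumed number of discrete push actions

class QNet(nn.Module):
    """Small CNN mapping an RGB observation to per-action Q-values (illustrative)."""
    def __init__(self, n_actions=N_ACTIONS):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_actions))

    def forward(self, obs):
        return self.head(self.conv(obs))

def collision_free(action, obs):
    """Hypothetical heuristic: reject push actions expected to collide.
    A real system might, e.g., test the push direction against detected objects."""
    return True  # placeholder for the example

def select_action(qnet, obs, eps=0.1):
    """Epsilon-greedy action selection where random exploration is limited to
    heuristically collision-free actions, reducing collisions during learning."""
    safe = [a for a in range(N_ACTIONS) if collision_free(a, obs)] or list(range(N_ACTIONS))
    if random.random() < eps:
        return random.choice(safe)              # biased exploration
    with torch.no_grad():
        q = qnet(obs.unsqueeze(0)).squeeze(0)   # obs: (3, H, W) tensor
    return max(safe, key=lambda a: q[a].item()) # greedy over safe actions
```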
“…Li et al. (2018) used a deep recurrent neural network to learn object motion properties for planar pushing of a single object. Yuan et al. (2018) designed a learning system that treats perception, action planning, and motion planning as an end-to-end process. Meriçli et al. (2015) learned object-specific, case-based models of planar object motion.…”
Section: Related Work
Mentioning, confidence: 99%
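As a loose illustration of the recurrent-dynamics idea attributed to Li et al. (2018) in the snippet above, the sketch below trains an LSTM to map sequences of push actions to per-step planar pose changes (dx, dy, dtheta). The dimensions, names, and synthetic data are assumptions made for the example, not details from the cited paper.

```python
import torch
import torch.nn as nn

class PushDynamicsRNN(nn.Module):
    """LSTM predicting per-step planar object motion (dx, dy, dtheta)
    from a sequence of push actions (e.g., contact point + push vector)."""
    def __init__(self, action_dim=4, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(action_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 3)   # (dx, dy, dtheta) per step

    def forward(self, actions):           # actions: (B, T, action_dim)
        h, _ = self.rnn(actions)
        return self.out(h)                # (B, T, 3)

# One gradient step on synthetic data, just to show the training-loop shape.
model = PushDynamicsRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
actions = torch.randn(32, 10, 4)          # batch of push-action sequences
true_motion = torch.randn(32, 10, 3)      # ground-truth pose deltas
loss = nn.functional.mse_loss(model(actions), true_motion)
opt.zero_grad(); loss.backward(); opt.step()
```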
“…Since deep reinforcement learning has shown success in complex artificial domains [25,26], controlling robots with reinforcement learning has become increasingly interesting [27]. For instance, it has been used to learn grasping [28,29] or manipulation in dynamic environments [5,30]. While these works exploit the advantages of deep models for visual input, this makes it difficult for them to generalize to different conditions.…”
Section: Related Work
Mentioning, confidence: 99%
“…Robotic manipulation is a complex problem that is often approached by grasping [1, 2] or non-prehensile pushing [3–5]. However, when heavy or bulky objects need to be manipulated, whole-arm manipulation (WAM) is usually much more suitable [6–9].…”
Section: Introduction
Mentioning, confidence: 99%