2018
DOI: 10.1109/tmech.2017.2717461
Reinforcement Learning of Manipulation and Grasping Using Dynamical Movement Primitives for a Humanoidlike Mobile Manipulator

Cited by 164 publications (70 citation statements)
References 33 publications
“…One of the most significant benefits of the bio-inspired human-to-robot impedance feature transfer is that adaptive impedance control for robotic arms can be realized, which a number of works have shown to outperform position control and invariant impedance control for in-contact tasks (e.g., [4,14,15,20,21]). In [22], a learning framework was established for achieving variable impedance control for robots. However, the variable impedance profiles are obtained via a time-consuming process, which may limit the framework's applicability to real-world applications.…”
Section: Introduction
confidence: 99%
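As standard background (not taken from the cited works): an impedance controller imposes a mass-spring-damper relation between the end-effector tracking error and the external contact wrench,

\[
\Lambda\,(\ddot{x}-\ddot{x}_d) + D(t)\,(\dot{x}-\dot{x}_d) + K(t)\,(x-x_d) = F_{\mathrm{ext}},
\]

where $x$ and $x_d$ are the actual and desired end-effector trajectories, $\Lambda$ is the desired inertia, and $D(t)$, $K(t)$ are the damping and stiffness profiles. Variable impedance control lets $D(t)$ and $K(t)$ change over time (e.g., profiles transferred from human demonstrations), whereas invariant impedance control keeps them constant, and pure position control corresponds to the very-high-stiffness limit.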
“…The test shows that the robot can learn how to open and drive through a door. In Reference [25], the authors use a reinforcement learning strategy for a humanoid-like mobile manipulator. The strategy includes a high-level online redundancy resolution based on a neural-dynamic optimization algorithm in operational space and a low-level RL in joint space based on dynamic movement primitives.…”
Section: Reinforcement Learning for Manipulation
confidence: 99%
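For context on the high-level layer mentioned in that excerpt: redundancy resolution maps a desired operational-space (end-effector) velocity to joint velocities when the arm has more joints than task dimensions. The sketch below uses a generic damped pseudoinverse with a null-space posture term; it only illustrates the problem being solved, not the neural-dynamic optimization algorithm of [25], and the function name, gains, and dimensions are assumptions.

```python
# Generic redundancy resolution sketch (illustrative; NOT the neural-dynamic
# optimization used in the cited work). Maps a desired end-effector velocity to
# joint velocities for a redundant arm via a damped pseudoinverse plus a
# null-space term for a secondary posture objective.
import numpy as np

def resolve_redundancy(J, x_dot_des, q, q_rest, damping=1e-2, k_null=0.5):
    """J: (m, n) task Jacobian with n > m; x_dot_des: (m,) desired task velocity;
    q, q_rest: (n,) current and preferred joint configurations."""
    m, n = J.shape
    # Damped least-squares pseudoinverse: robust near kinematic singularities.
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(m))
    # Secondary objective (stay near a rest posture), projected into the task null space.
    q_dot_secondary = k_null * (q_rest - q)
    N = np.eye(n) - J_pinv @ J
    return J_pinv @ x_dot_des + N @ q_dot_secondary

# Example call with placeholder dimensions: a 6-D task and a 9-joint mobile manipulator.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 9))
q = rng.standard_normal(9)
q_dot = resolve_redundancy(J, x_dot_des=np.zeros(6), q=q, q_rest=np.zeros(9))
```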
“…In order to address the above questions, much recent work has focused on dynamic movement primitives (DMPs) [17][18][19][20][21][22], which offer a simple and versatile framework to represent and generate related movements. The core of DMPs is learning from demonstration (LfD).…”
Section: Introduction
confidence: 99%
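As background on the formulation cited above, the sketch below implements a standard one-dimensional discrete DMP and fits its forcing term from a single demonstration via locally weighted regression (the LfD step). The gains, basis-width heuristic, and demonstration trajectory are illustrative assumptions, not values from the cited works.

```python
# Minimal one-dimensional discrete DMP sketch (illustrative assumptions throughout).
import numpy as np

class DiscreteDMP:
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_s=4.0):
        self.alpha_z, self.beta_z, self.alpha_s = alpha_z, beta_z, alpha_s
        # Gaussian basis centers spaced along the exponentially decaying phase variable s.
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))
        self.h = n_basis ** 1.5 / self.c          # basis-width heuristic (assumption)
        self.w = np.zeros(n_basis)

    def fit(self, y_demo, dt):
        """Learning from demonstration: regress the forcing term of one trajectory."""
        T = len(y_demo)
        self.tau = T * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        s = np.exp(-self.alpha_s * np.arange(T) * dt / self.tau)
        # Invert the transformation system to get the forcing term the demo requires.
        f_target = self.tau ** 2 * ydd - self.alpha_z * (
            self.beta_z * (self.g - y_demo) - self.tau * yd)
        psi = np.exp(-self.h[:, None] * (s[None, :] - self.c[:, None]) ** 2)
        xi = s * (self.g - self.y0)
        # Locally weighted regression: one weight per basis function.
        self.w = (psi @ (xi * f_target)) / (psi @ (xi ** 2) + 1e-10)

    def rollout(self, dt, goal=None):
        """Integrate the DMP; the goal can be changed without refitting."""
        g = self.g if goal is None else goal
        y, z, s, traj = self.y0, 0.0, 1.0, []
        for _ in range(int(self.tau / dt)):
            psi = np.exp(-self.h * (s - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * s * (g - self.y0)
            z += dt * (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            y += dt * z / self.tau
            s += dt * (-self.alpha_s * s) / self.tau
            traj.append(y)
        return np.array(traj)

# Usage: fit a smooth 0 -> 1 reaching demonstration, then generalize to a new goal.
t = np.linspace(0, 1, 200)
demo = 10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5
dmp = DiscreteDMP()
dmp.fit(demo, dt=1.0 / 200)
replay = dmp.rollout(dt=1.0 / 200, goal=1.5)
```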