2014
DOI: 10.1108/ir-07-2014-0363

Solving peg-in-hole tasks by human demonstration and exception strategies

Cited by 59 publications (48 citation statements)
References 23 publications

“…The reward functions include the terms of the force, moment, z-axis robot position, xyz robot rotation, and time step. Meta-parameters α, β, and γ control the weights of the terms individually because they deal with different information types (see Equations (8) and (9)). We can change the importance of each term in the reward function by adjusting these meta-parameters.…”
Section: Discussion
confidence: 99%
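
The cited reward combines heterogeneous terms (forces, poses, time), so the meta-parameters α, β, and γ set their relative scales. Since Equations (8) and (9) are not reproduced on this page, the following Python sketch only illustrates a reward of that general shape; the function name, term groupings, and default weights are assumptions, not the cited authors' implementation.

```python
import numpy as np

def reward(force, moment, z_pos, rotation, step,
           alpha=1.0, beta=1.0, gamma=0.01):
    """Illustrative weighted reward over wrench, pose, and time terms.

    force    : (3,) contact force [N]
    moment   : (3,) contact moment [N*m]
    z_pos    : remaining insertion depth along the hole axis [m]
    rotation : (3,) xyz rotation error [rad]
    step     : current time step (penalizes slow insertions)
    """
    # alpha/beta/gamma rescale terms with different units and meanings
    # so no single information type dominates the sum by default.
    wrench_term = alpha * (np.linalg.norm(force) + np.linalg.norm(moment))
    pose_term = beta * (abs(z_pos) + np.linalg.norm(rotation))
    time_term = gamma * step
    return -(wrench_term + pose_term + time_term)
```

Raising one meta-parameter relative to the others shifts the learned behavior toward minimizing that term, which is the adjustment the citing authors describe.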
“…We attempt to obtain these optimal solutions with fewer trials and errors by providing near-optimal solutions learned from human demonstrations. In this paper, robots improve motor skills to optimize them (referred to as improvement) and generalize them so that they are widely applicable (known as generalization) through self-learning. The peg-in-hole task has also been addressed through several imitation learning studies [8,9]. However, the peg-in-hole task is not easy to learn with this method alone.…”
confidence: 99%
“…Inverse reinforcement learning (IRL), on the other hand, estimates a reward function based on the expert's demonstration [17,18], then utilizes the learned reward function to obtain the policy. Existing demonstration methods include visual demonstration [19-23], force demonstration [24-26], visual and force demonstration [27,28], and trajectory demonstration [29-35].…”
Section: Robot Learning From Demonstration
confidence: 99%
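
As a rough illustration of that IRL loop (estimate a reward from demonstrations, then derive a policy from it), here is a minimal feature-matching sketch for a linear reward r(s) = w·φ(s). The helper `sample_policy_features`, which stands in for the inner policy-optimization step, and all parameter values are hypothetical.

```python
import numpy as np

def feature_matching_irl(expert_features, sample_policy_features,
                         n_iters=100, lr=0.1):
    """Toy IRL: learn weights w of a linear reward r(s) = w . phi(s).

    expert_features        : (d,) mean feature vector over demonstrations
    sample_policy_features : callable w -> (d,) mean features of a policy
                             (approximately) optimal under reward weights w;
                             placeholder for the inner RL/planning step
    """
    w = np.zeros_like(expert_features, dtype=float)
    for _ in range(n_iters):
        mu_pi = sample_policy_features(w)
        # Gradient-style update: push the reward to score expert
        # behavior above the current policy's behavior.
        w += lr * (expert_features - mu_pi)
        w /= max(np.linalg.norm(w), 1e-8)  # keep weights bounded
    return w
```

Once the weights converge, running the same inner optimizer under the learned reward yields the final policy, which is the two-stage structure the quote describes.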
“…Steinmetz et al. use an external force/torque sensor mounted on the robot arm to record force and position simultaneously and learn the force profiles for a PID controller. Abu-Dakka et al. redeploy peg-in-hole skills in new settings by adding exception strategies that randomly search for the hole to improve robustness [4]. Nemec et al. use a priori knowledge and reinforcement learning to reduce the number of demonstrations needed to teach a flip task to a robot [24], and Pastor et al. use reinforcement learning to optimize a pool strike [27].…”
Section: Related Work
confidence: 99%
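
The exception strategy attributed to Abu-Dakka et al. [4] amounts to a bounded random search around the demonstrated hole position when insertion fails. A minimal sketch of that idea follows; `try_insert`, the search radius, and the attempt limit are illustrative placeholders, not details from the paper.

```python
import numpy as np

def random_search_for_hole(try_insert, start_xy, radius=0.005,
                           max_attempts=50, rng=None):
    """Exception-strategy sketch: probe random lateral offsets around the
    demonstrated hole position until insertion succeeds.

    try_insert : callable (x, y) -> bool; runs the insertion skill at the
                 given position and reports success (robot-side placeholder)
    start_xy   : length-2 array, nominal hole position from the demo [m]
    radius     : half-width of the square search region [m]
    """
    rng = rng or np.random.default_rng()
    start_xy = np.asarray(start_xy, dtype=float)
    if try_insert(*start_xy):          # retry the demonstrated pose first
        return start_xy
    for _ in range(max_attempts):
        candidate = start_xy + rng.uniform(-radius, radius, size=2)
        if try_insert(*candidate):
            return candidate           # hole found at the corrected pose
    return None                        # escalate: no feasible pose found
```

Keeping the search centered on the demonstrated pose preserves the human demonstration as the nominal skill while the random probing handles the pose uncertainty that makes a single replayed trajectory brittle.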