2007
DOI: 10.1109/tsmcb.2006.886951

Incremental Learning of Tasks From User Demonstrations, Past Experiences, and Vocal Comments

Abstract: For many years, the robotics community has envisioned robot assistants sharing the same environment with humans. It has become obvious that they have to interact with humans and should adapt to individual user needs. In particular, the wide variety of tasks that robot assistants will face requires a highly adaptive and user-friendly programming interface. One possible solution to this programming problem is the learning-by-demonstration paradigm, in which the robot is supposed to observe the execution of a task, acquire…

Cited by 115 publications (72 citation statements); references 13 publications.
“…Programming mobile robots by demonstration is now a major trend in the robotics community [Pardowitz et al., 2007; Alissandrakis et al., 2005]. Many researchers demonstrate the viability of this approach in tasks such as maze navigation [Hayes and Demiris, 1994; Demiris and Hayes, 1996], arm movement [Schaal, 1997; Calinon and Billard, 2007] or service robotics [Demiris and Johnson, 2003].…”
Section: Related Work (mentioning)
Confidence: 99%
“…Dautenhahn and Nehaniv (2002) proposed an approach for the robot to learn from human demonstration by imitation, referred to as the correspondence problem, and later the team developed a system that can learn two-dimensional (2D) arranging tasks (Alissandrakis et al. 2005a,b). Pardowitz et al. (2006) proposed a hierarchical structure for the robot to deal with complex tasks in which the motion order can be changed, and later they went on to analyze human motion features for high-level tasks (Pardowitz et al. 2007). With both symbolic and trajectory levels of skill representation, Ogawara et al. (2003) proposed a method that determines the essential motions from the possible motions.…”
Section: Introduction (mentioning)
Confidence: 99%
“…In general, one is interested in the autonomous identification of action primitives in the context of imitation learning and human-machine interaction (Sanmohan, Krüger, & Kragic, 2010; Takano & Nakamura, 2006). Within this domain, Matsuo et al. focused on force feedback (Matsuo, Murakami, Hasegawa, Tahara, & Ryo, 2009), while a combination of different sensors such as CyberGlove, Vicon or magnetic markers and tactile sensors has been used by (Pardowitz, Knoop, Dillmann, & Zöllner, 2007), (Kawasaki et al., 2000) and (Li, Kulkarni, & Prabhakaran, 2006). In (Zöllner, Asfour, & Dillmann, 2004) a bimanual approach is described.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Our method does not employ any specific knowledge about the components of the action sequence. Based on two simple models, the modeling does not require a large set of domain-specific heuristics describing each action primitive, as is commonly the case in similar approaches (Pardowitz et al., 2007; Kawasaki et al., 2000). Due to the simplicity of these two fundamental models and the modeling concepts used within our approach, the developed procedure can be easily used in a wide range of scenarios, like imitation learning, cooperation and assistance.…”
Section: Introduction (mentioning)
Confidence: 99%