Abstract—In this work, we propose an approach for learning task specifications automatically by observing human demonstrations. These specifications allow a robot to combine representations of individual actions to achieve a high-level goal. We hypothesize that task specifications consist of variables whose pattern of change is invariant across demonstrations. We identify these specifications at different stages of task completion. Changes in task constraints allow us to identify transitions in the task description and to segment it into sub-tasks. We extract the following task-space constraints: (1) the reference frame in which to express the task variables; (2) the variable of interest at each time step, position or force at the end effector; and (3) a factor that modulates the contribution of force and position in a hybrid impedance controller. The approach was validated on a 7-DOF KUKA arm performing two different tasks: grating vegetables and extracting a battery from a charging stand.
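The third extracted constraint, a factor that modulates force versus position contributions, can be illustrated with a minimal one-dimensional sketch. All names and gains below are illustrative assumptions; the abstract does not specify the controller's exact form.

```python
def hybrid_command(alpha, k_p, x_des, x, k_f, f_des, f):
    """Blend a position-tracking and a force-tracking term for a single axis.

    alpha in [0, 1] is the modulation factor: alpha = 1 gives pure
    impedance-style position tracking, alpha = 0 gives pure force tracking.
    This is a simplified sketch, not the paper's actual control law.
    """
    position_term = k_p * (x_des - x)   # stiffness times position error
    force_term = k_f * (f_des - f)      # gain times force-tracking error
    return alpha * position_term + (1.0 - alpha) * force_term
```

In a task like vegetable grating, a learned alpha would shift toward force tracking during contact phases and toward position tracking during free motion.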
This paper introduces a hierarchical framework capable of learning complex sequential tasks from human demonstrations through kinesthetic teaching, with minimal human intervention. Via an automatic task-segmentation and action-primitive discovery algorithm, we learn both the high-level task decomposition (into action primitives) and the low-level motion parameterizations for each action in a fully integrated framework. To reach the desired task goal, we encode a task metric based on the evolution of the manipulated object during demonstration and use it to sequence and parameterize each action primitive. We illustrate this framework with a pizza-dough rolling task and show how the learned hierarchical knowledge is used directly for autonomous robot execution.
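Using an object-centric task metric to sequence primitives could be sketched as a greedy loop: at each step, pick the primitive whose predicted effect most reduces the metric. This is a hypothetical simplification under assumed names (`metric`, `effect`); the paper's actual sequencing logic is not specified at this level of detail.

```python
def sequence_primitives(state, goal, primitives, metric, max_steps=10):
    """Greedily chain action primitives until the task metric (distance of
    the manipulated object's state from the goal) stops improving.

    primitives: list of (name, effect) pairs, where effect maps a state to
    the predicted next state. An illustrative sketch only.
    """
    plan = []
    for _ in range(max_steps):
        if metric(state, goal) == 0:
            break  # goal reached
        name, effect = min(primitives, key=lambda p: metric(p[1](state), goal))
        next_state = effect(state)
        if metric(next_state, goal) >= metric(state, goal):
            break  # no primitive makes progress
        plan.append(name)
        state = next_state
    return plan
```

For a dough-rolling toy example where the object state is the dough area, a "roll" primitive that doubles the area would be selected repeatedly until the target area is met.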
In robot Programming by Demonstration (PbD), the interaction with the human user is key to collecting good demonstrations, learning, and finally achieving good task execution. We therefore take a dual approach to analyzing demonstration data. First, we automatically determine task constraints that can be used in the learning phase. Specifically, we determine the frame of reference to be used in each part of the task, the important variables for each axis, and a stiffness modulation factor. Additionally, for bi-manual tasks we determine arm dominance and spatial or force coordination patterns between the arms. Second, we analyze human behavior during demonstration to determine how skilled the human user is and what kind of feedback is preferred during the learning interaction. We test this approach on complex tasks requiring sequences of actions, bi-manual or arm-hand coordination, and contact on each end effector.
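One common way to determine the frame of reference automatically, consistent with the invariance hypothesis stated above, is to express the demonstrated trajectories in each candidate frame and select the frame in which they vary least across demonstrations. The sketch below assumes scalar, time-aligned trajectories; the real method and data representation may differ.

```python
def select_reference_frame(demos_by_frame):
    """Pick the candidate frame in which demonstrations are most consistent.

    demos_by_frame: dict mapping frame name -> list of equal-length scalar
    trajectories (one per demonstration), each expressed in that frame.
    Returns the frame with the smallest variance across demonstrations at
    corresponding time steps. Illustrative sketch only.
    """
    def cross_demo_variance(demos):
        total, count = 0.0, 0
        for samples in zip(*demos):            # samples at one time step
            mean = sum(samples) / len(samples)
            total += sum((s - mean) ** 2 for s in samples)
            count += len(samples)
        return total / count
    return min(demos_by_frame, key=lambda f: cross_demo_variance(demos_by_frame[f]))
```

For example, in a grasping segment the trajectories typically collapse onto one curve when expressed in the object frame but scatter in the robot-base frame, so the object frame is selected.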