Abstract: “Robots that interact with humans must learn to not only adapt to different human partners but also to new interactions. Such a form of learning can be achieved by demonstrations and imitation. A recently introduced method to learn interactions from demonstrations is the framework of Interaction Primitives. While this framework is limited to representing and generalizing a single interaction pattern, in practice, interactions between a human and a robot can consist of many different patterns. To overcome th…”
“…1(a), we applied our method in a multi-task scenario where the robot plays the role of a coworker that helps a human assemble a toolbox. This scenario was previously proposed in [8], where time-alignment was used on the training data. While in our previous work conditioning could only be computed at the end of the movement, here the robot can predict the collaborative trajectory before the human finishes moving, leading to a faster robot response.…”
Section: A Multi-task Semi-autonomous Robot Coworker (mentioning)
confidence: 99%
“…Previous works [14,8] have only addressed spatial variability, but not temporal variability of movements. However, when demonstrating the same task multiple times, a human demonstrator will inevitably execute movements at different speeds, thus changing the phase at which events occur.…”
Section: Estimating Phases and Actions of Multiple Tasks (mentioning)
confidence: 99%
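The phase mismatch described above can be made concrete: if each demonstration is indexed by a normalized phase z ∈ [0, 1] instead of absolute time, basis-function activations become comparable across execution speeds. A minimal sketch, with hypothetical basis parameters rather than the paper's exact model:

```python
import numpy as np

def rbf_basis(z, n_basis=10, width=0.05):
    """Normalized Gaussian RBF activations at phase z in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    psi = np.exp(-((z - centers) ** 2) / (2.0 * width))
    return psi / psi.sum()

# Two executions of the same movement at different speeds:
slow = np.linspace(0.0, 1.0, 201)  # 201 samples, slow execution
fast = np.linspace(0.0, 1.0, 51)   # 51 samples, fast execution

# Halfway through either execution the time index differs (100 vs. 25),
# but the phase, and hence the basis activation, is identical.
assert np.allclose(rbf_basis(slow[100]), rbf_basis(fast[25]))
```

Indexing demonstrations by time instead of phase would make these two activation vectors disagree, which is exactly the variability the passage describes.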
“…As a consequence, during execution, the conditioning (6) can only be used when the phase of the human demonstrator coincides with the phase encoded by the time-aligned model, which is unrealistic in practice. In [14,8] we avoided this problem by conditioning only on the last position of the human movement, since for this particular case the corresponding basis function is known to be ψ_T. For any other time step t, the association between y*_t and the basis ψ_t is unknown, given that the human presents temporal variability and that the velocity is either unobserved or its computation from derivatives is impractical due to the sparsity of position measurements.…”
Section: Estimating Phases and Actions of Multiple Tasks (mentioning)
confidence: 99%
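For reference, the conditioning the passage refers to is, in ProMPs, a Gaussian conditioning of the weight distribution on an observed position, and it is only usable when the basis vector ψ_t for that observation is known, which is precisely what temporal variability breaks. A hedged sketch of the standard update (hypothetical function names):

```python
import numpy as np

def condition_promp(mu_w, Sigma_w, psi, y_obs, sigma_y=1e-4):
    """Condition the weight prior N(mu_w, Sigma_w) on y_obs = psi . w + noise.

    psi is the basis-activation vector for the observed step; the whole
    update is valid only if that vector is known (e.g. psi_T at the end).
    """
    psi = np.asarray(psi).reshape(-1)
    s = psi @ Sigma_w @ psi + sigma_y   # innovation variance (scalar)
    k = (Sigma_w @ psi) / s             # gain, shape (n_basis,)
    mu_new = mu_w + k * (y_obs - psi @ mu_w)
    Sigma_new = Sigma_w - np.outer(k, psi @ Sigma_w)
    return mu_new, Sigma_new
```

Conditioning on the final position, as in [14,8], corresponds to calling this with the known last-step activation; for intermediate observations the correct psi depends on the unknown phase.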
“…First, the training data must be time-aligned, for example by DTW; second, only one type of interaction pattern, or collaborative task, can be encoded within a single Gaussian (mixture models were used to address the latter problem in an unsupervised fashion [8]).…”
Section: Probabilistic Movement Primitives on a Single Degree-of-Freedom (mentioning)
confidence: 99%
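The DTW alignment mentioned here is the classic dynamic-programming recurrence; a minimal 1-D sketch, not the paper's implementation:

```python
import numpy as np

def dtw_cost(a, b):
    """Alignment cost between 1-D sequences a and b via dynamic time warping."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match, insertion, deletion
            D[i, j] = d + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# A sequence and a time-stretched copy of it align at zero cost:
assert dtw_cost([0.0, 1.0, 2.0], [0.0, 1.0, 1.0, 2.0]) == 0.0
```

This is the preprocessing step the phase-dependent method aims to eliminate: DTW must be run on the whole training set before learning, whereas phase estimation handles speed differences at execution time.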
“…It leverages the representation of movements with ProMPs, our developments in the context of human-robot interaction [1,14], and the ability to address multiple tasks [14,8]. While our previous interaction models were explicitly time-dependent, here we introduce a phase-dependent method.…”
This paper proposes an interaction learning method suited for semi-autonomous robots that work with or assist a human partner. The method aims at generating a collaborative trajectory for the robot as a function of the current action of the human. The trajectory generation is based on action recognition and prediction of the human movement given intermittent observations of his/her positions under unknown speeds of execution, a problem typically found when using motion-capture systems in scenarios that lead to occlusion. Of particular interest, the ability to predict the human movement while observing the initial part of his/her trajectory allows for faster robot reactions and, as will be shown, also eliminates the need for time-alignment of the training data. The method models the coupling between human and robot movement primitives and scales with the number of tasks. We evaluated the method using a 7-DoF lightweight robot arm equipped with a 5-finger hand in a multi-task collaborative assembly experiment, also comparing results with our previous method based on time-aligned trajectories.
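The action-recognition step described in the abstract can be sketched as simple model selection: summarize each task's human trajectory with a Gaussian and score a partial observation under each model. All names below are hypothetical, not from the paper:

```python
import numpy as np

def log_gauss(y, mu, var):
    """Log-density of y under independent Gaussians (diagonal covariance)."""
    y, mu, var = map(np.asarray, (y, mu, var))
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def recognize(y_partial, task_models):
    """Pick the task whose model best explains a partial observation."""
    t = len(y_partial)
    scores = {name: log_gauss(y_partial, m["mu"][:t], m["var"][:t])
              for name, m in task_models.items()}
    return max(scores, key=scores.get)

models = {
    "hand over plate": {"mu": np.zeros(10), "var": np.ones(10)},
    "hold screw":      {"mu": np.full(10, 3.0), "var": np.ones(10)},
}
# Three early samples near zero already identify the first task:
assert recognize([0.1, -0.2, 0.0], models) == "hand over plate"
```

This illustrates why observing only the initial part of the trajectory can suffice for a fast robot response; the paper's actual method additionally estimates the phase rather than assuming one sample per time step.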