Abstract: This paper proposes a probabilistic framework based on movement primitives for robots that work in collaboration with a human coworker. Since the human coworker can execute a variety of unforeseen tasks, a requirement of our system is that the robot assistant must be able to adapt and learn new skills on demand, without the need for an expert programmer. Thus, this paper leverages the framework of imitation learning and its application to human-robot interaction using the concept of Interaction Primitives…
“…ProMPs use the concept of phases in the same manner, with the difference that the basis functions are used to encode positions. This difference is fundamental for Interaction Primitives since estimating the forcing function of the human is nontrivial in practice, while positions can be often measured directly [14].…”
Section: Related Work
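The position-based ProMP representation quoted above can be illustrated with a minimal sketch. A trajectory is represented as a weighted sum of basis functions of a phase variable z in [0, 1], so the same movement traces out the same curve over z regardless of execution speed. The basis count, width, and weights below are illustrative, not the paper's actual settings:

```python
import numpy as np

def promp_basis(z, n_basis=10, width=0.05):
    """Normalized Gaussian basis features evaluated at phase z in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (z - centers) ** 2 / width)
    return phi / phi.sum()  # features sum to 1 at every phase value

# A position trajectory is a weighted sum of the basis features:
# y(z) = phi(z)^T w. The phase z abstracts away execution speed.
weights = np.random.randn(10)         # illustrative weight vector
phase = np.linspace(0.0, 1.0, 100)
trajectory = np.array([promp_basis(z) @ weights for z in phase])
```

Because the basis functions encode positions directly, learning the weights only requires measured positions, not an estimate of the human's forcing function.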
“…It leverages on the representation of movements with ProMPs, our developments into the context of human-robot interaction [1,14], and the ability to address multiple tasks [14,8]. While our previous interaction models were explicitly time-dependent, here, we introduce a phase-dependent method.…”
This paper proposes an interaction learning method suited for semi-autonomous robots that work with or assist a human partner. The method aims at generating a collaborative trajectory of the robot as a function of the current action of the human. The trajectory generation is based on action recognition and prediction of the human movement given intermittent observations of his/her positions under unknown speeds of execution; a problem typically found when using motion capture systems in scenarios that lead to occlusion. Of particular interest, the ability to predict the human movement while observing the initial part of his/her trajectory allows for faster robot reactions and, as will be shown, also eliminates the need for time-alignment of the training data. The method models the coupling between human-robot movement primitives and is scalable in relation to the number of tasks. We evaluated the method using a 7-DoF lightweight robot arm equipped with a 5-finger hand in a multi-task collaborative assembly experiment, also comparing results with our previous method based on time-aligned trajectories.
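At its core, inferring the robot's collaborative trajectory from partial human observations is Gaussian conditioning: a joint distribution over stacked human and robot trajectory parameters is conditioned on the observed human positions. The following is a minimal sketch under that assumption; the variable names, dimensions, and noise level are illustrative, not taken from the paper:

```python
import numpy as np

def condition_on_human(mu, Sigma, H, y_obs, obs_noise=1e-4):
    """Condition a joint Gaussian N(mu, Sigma) over stacked human-robot
    parameters on a partial human observation y_obs = H w + noise,
    returning the posterior mean and covariance."""
    S = H @ Sigma @ H.T + obs_noise * np.eye(len(y_obs))
    K = Sigma @ H.T @ np.linalg.inv(S)          # Kalman-style gain
    mu_post = mu + K @ (y_obs - H @ mu)
    Sigma_post = Sigma - K @ H @ Sigma
    return mu_post, Sigma_post

# Illustrative 4-D joint: first two entries "human", last two "robot".
mu = np.zeros(4)
Sigma = np.eye(4) + 0.5 * np.ones((4, 4))       # correlated human-robot parts
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])  # only the human is observed
mu_post, Sigma_post = condition_on_human(mu, Sigma, H, np.array([1.0, 2.0]))
```

Because the human and robot parameters are correlated in the prior, observing only the human shifts the mean of the robot part as well, which is what produces the robot's predicted collaborative trajectory.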
“…Therefore, even if several trajectories of different components largely overlap in space, it is still possible to identify the correct component with high certainty, as the order in which those measurements are made is also taken into account. (The interested reader is also referred to (Maeda et al., 2014), where action recognition experiments were conducted in more detail).…”
Section: Inference of the Assistant's Trajectory
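The action recognition described in this snippet amounts to computing the posterior probability of each task (mixture component) given the partial observation; stacking the measurements at their phase positions into one observation vector is what makes the measurement order matter. A hedged sketch, assuming Gaussian observation models per task (the priors, means, and covariances below are placeholders):

```python
import numpy as np

def recognize_action(y_obs, components):
    """Posterior probability of each task given partial human observations.
    Each component is a (prior, mean, covariance) triple of a Gaussian."""
    log_post = []
    for prior, mu, Sigma in components:
        diff = y_obs - mu
        _, logdet = np.linalg.slogdet(Sigma)
        loglik = -0.5 * (diff @ np.linalg.solve(Sigma, diff)
                         + logdet + len(y_obs) * np.log(2.0 * np.pi))
        log_post.append(np.log(prior) + loglik)
    log_post = np.array(log_post)
    log_post -= log_post.max()                  # numerical stability
    p = np.exp(log_post)
    return p / p.sum()

# Two illustrative tasks whose observation models differ only in mean.
tasks = [(0.5, np.zeros(2), np.eye(2)),
         (0.5, np.array([5.0, 5.0]), np.eye(2))]
posterior = recognize_action(np.array([0.2, -0.1]), tasks)
```

The observation vector here is close to the first task's mean, so its posterior probability dominates even though both tasks have equal priors.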
“…In the context of movement primitives, this clock is often referred to as the phase variable. In this paper, all human and robot trajectories collected during the experiments presented in Section 4 were aligned by using the method briefly presented in (Maeda et al., 2014) and will be described in detail here.…”
Section: Appendix: Time-Alignment of Multiple Demonstrations
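In its simplest form, time-alignment of demonstrations can be approximated by resampling each one onto a common phase axis. The sketch below uses uniform linear time normalization; the paper's actual alignment method, referenced above as (Maeda et al., 2014), is more involved than this:

```python
import numpy as np

def align_to_phase(demos, n_samples=100):
    """Resample demonstrations of different lengths onto a common phase
    axis z in [0, 1] by linear interpolation (uniform time normalization)."""
    z = np.linspace(0.0, 1.0, n_samples)
    aligned = []
    for demo in demos:
        t = np.linspace(0.0, 1.0, len(demo))   # each demo's own normalized time
        aligned.append(np.interp(z, t, demo))
    return np.stack(aligned)

# Two demonstrations of the same movement executed at different speeds.
slow = np.sin(np.linspace(0.0, np.pi, 200))
fast = np.sin(np.linspace(0.0, np.pi, 60))
aligned = align_to_phase([slow, fast])
```

After resampling, both executions live on the same phase grid, so sample-wise statistics (means, covariances) across demonstrations become well defined.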
“…This paper consolidates the theoretical framework of a mixture of Interaction ProMPs and validates the method in assistive and collaborative tasks. Parts of this paper have previously appeared, in less detail, in conference proceedings (Maeda et al., 2014; Ewerton et al., n.d.), where preliminary versions of our algorithm were described. The remainder of this paper is organized as follows: Section 2 describes related work; Section 3 describes the proposed method and compares it with the previous framework of interaction primitives based on Dynamical Movement Primitives (DMPs).…”
This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human-robot movement coordination. It uses imitation learning to construct a mixture model of human-robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human-robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human, and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.

Keywords: Movement primitives · physical human-robot interaction · imitation learning · mixture model · action recognition · trajectory generation