Mimicry and deferred imitation have often been considered separate kinds of imitation. In this paper, we present a simple architecture for robotic arm control that can be used for both. The model is based on dynamical equations [1] that provide motor control with both exploration and convergence capabilities. A visuo-motor map [2] associates positions of the end effector in visual space with proprioceptive positions of the robotic arm, enabling fast learning of visuo-motor associations without embedding a priori information. The controller can be used both for accurate control and for interaction. It has been implemented on a minimal robotic setup and shows interesting emergent properties: the robot can reproduce simple gestures in a mimicry situation and finalized actions in a deferred imitation situation, and it even exhibits some "intention"-recognition abilities. Finally, the deferred imitation experiment, which builds on learning by demonstration, also provides a good basis for cooperative and interactive experiments.
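A visuo-motor map of the kind described above can be sketched as an online association table between visual end-effector positions and arm postures. The category recruitment rule, learning rate, and vigilance threshold below are illustrative assumptions, not the authors' equations from [2]:

```python
import numpy as np

class VisuoMotorMap:
    """Minimal sketch of a visuo-motor map: visual positions of the
    end effector are associated online with the proprioceptive joint
    configuration observed at the same time, with no a priori model."""

    def __init__(self, vigilance=0.1, lr=0.5):
        self.visual = []          # stored visual categories (x, y)
        self.proprio = []         # associated joint configurations
        self.vigilance = vigilance  # recruit a new category beyond this distance
        self.lr = lr

    def learn(self, visual_pos, joint_pos):
        visual_pos = np.asarray(visual_pos, dtype=float)
        joint_pos = np.asarray(joint_pos, dtype=float)
        if self.visual:
            d = [np.linalg.norm(visual_pos - v) for v in self.visual]
            i = int(np.argmin(d))
            if d[i] < self.vigilance:
                # refine the winning category instead of recruiting a new one
                self.visual[i] += self.lr * (visual_pos - self.visual[i])
                self.proprio[i] += self.lr * (joint_pos - self.proprio[i])
                return
        self.visual.append(visual_pos)
        self.proprio.append(joint_pos)

    def recall(self, visual_pos):
        """Return the posture associated with the nearest visual category."""
        d = [np.linalg.norm(np.asarray(visual_pos, dtype=float) - v)
             for v in self.visual]
        return self.proprio[int(np.argmin(d))]
```

In a mimicry setting, `recall` maps an observed target position in the visual field to a joint posture that the dynamical controller can then converge toward.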
When a robot is brought into a new environment, it has very limited knowledge of its surroundings and of what it can do. One way to build up that knowledge is exploration, but it is a slow process. Programming by demonstration is an efficient way to learn from interaction: a robot can imitate gestures it was shown through passive manipulation. Depending on the representation of the task, the robot may also be able to plan its actions and even adapt its representation when further interactions change its knowledge of the task. In this paper we present a bio-inspired neural network that lets a robot learn arm gestures demonstrated through passive manipulation, and that also allows it to plan arm movements according to activated goals. The model is applied to learning a pick-and-place task: the robot learns to pick up objects at a specific location and drop them in one of two boxes depending on their color. Because our system learns continuously, the robot's behavior can always be adapted by the human interacting with it. This ability is demonstrated by teaching the robot to switch the goals for both types of objects.
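The continuously adaptable color-to-goal mapping described above can be illustrated with a simple online weight update. This is a sketch, not the paper's neural network; the box names, learning rate, and delta-rule update are assumptions:

```python
class GoalAssociation:
    """Sketch of a continuously learned color -> goal-box association.
    Each demonstration moves the weights toward the observed pairing,
    so repeated human corrections can switch a goal at any time."""

    BOXES = ("box_A", "box_B")

    def __init__(self, lr=0.5):
        self.weights = {}   # (color, box) -> association strength
        self.lr = lr

    def demonstrate(self, color, box):
        # delta-rule update: strengthen the shown pairing, weaken the other
        for b in self.BOXES:
            target = 1.0 if b == box else 0.0
            w = self.weights.get((color, b), 0.0)
            self.weights[(color, b)] = w + self.lr * (target - w)

    def goal(self, color):
        """Most strongly associated box for this object color."""
        return max(self.BOXES,
                   key=lambda b: self.weights.get((color, b), 0.0))
```

Because learning never stops, a few new demonstrations are enough to override an earlier association, mirroring the goal-switching experiment.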
Imitation and learning from humans require an adequate sensorimotor controller to learn and encode behaviors. We present the Dynamic Muscle Perception-Action (DM-PerAc) model to control a multiple degrees-of-freedom (DOF) robot arm. In the original PerAc model, path-following or place-reaching behaviors correspond to sensorimotor attractors resulting from the dynamics of learned sensorimotor associations. The DM-PerAc model, inspired by human muscles, combines impedance-like control with the capability of learning sensorimotor attraction basins. We detail a solution for learning the DM-PerAc visuomotor controller incrementally online. Postural attractors are learned by adapting the muscle activations in the model according to movement errors. Visuomotor categories merging visual and proprioceptive signals are associated with these muscle activations; thus, the visual and proprioceptive signals activate a motor action generating an attractor that satisfies both visual and proprioceptive constraints. This visuomotor controller can serve as a basis for imitative behaviors. In addition, the muscle-activation patterns can define directions of movement instead of postural attractors. Such patterns can be used in state-action couples to generate trajectories, as in the PerAc model. We discuss a possible extension of the DM-PerAc controller that adapts Fukuyori's controller based on the Langevin equation. This controller can serve not only to reach attractors that were not explicitly learned, but also to learn the state-action couples that define trajectories.
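A Langevin-style controller of the kind mentioned above combines a deterministic drift toward an attractor with stochastic exploration whose amplitude shrinks as the error shrinks. The gains and the error-scaled noise law below are illustrative assumptions, not Fukuyori's exact formulation:

```python
import numpy as np

def langevin_step(q, target, dt=0.05, gain=2.0, noise_scale=0.5, rng=None):
    """One update of a Langevin-style joint controller (a sketch):
    drift pulls the posture q toward the target attractor, while
    error-scaled noise explores around it and vanishes at convergence."""
    rng = np.random.default_rng() if rng is None else rng
    error = target - q
    drift = gain * error * dt                        # converging term
    diffusion = noise_scale * np.linalg.norm(error)  # exploration amplitude
    noise = diffusion * np.sqrt(dt) * rng.standard_normal(q.shape)
    return q + drift + noise
```

Because the stochastic term scales with the remaining error, the arm can wander past intermediate configurations (reaching states that were never explicitly learned) yet still settle onto the attractor.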