During development, infants learn to differentiate their motor behaviors across contexts by exploring and identifying the cause-and-effect structures of the actions they can perform; these structures are called task sets or internal models. The ability to detect the structure of new actions, to learn them, and to select the proper one on the fly for the current task is a great leap in infant cognition. This behavior is an important component of the child's capacity for learning-to-learn, a mechanism akin to intrinsic motivation that is argued to drive cognitive development. Accordingly, we propose a dual system based on (1) the learning of new task sets and (2) their evaluation with respect to their uncertainty and prediction error. The architecture is designed as a two-level neural system for context-dependent behavior (the first system) and task exploration and exploitation (the second system). In our model, the task sets are learned separately by reinforcement learning in the first network after their evaluation and selection in the second one. We perform two experimental setups to show sensorimotor mapping and switching between tasks: a neural simulation of cognitive tasks, and an arm-robot experiment on motor task learning and switching. We show that the interplay of several intrinsic mechanisms drives the rapid formation of neural populations with respect to novel task sets.
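The dual system described above can be illustrated with a minimal sketch. All names, thresholds, and update rules here are illustrative assumptions, not the paper's implementation: each task set carries a running prediction error, low-error sets are exploited via a softmax, and a new set is spawned when every existing set is surprising.

```python
import numpy as np

def select_task_set(errors, beta=5.0):
    """Softmax over negated prediction errors: lower recent error
    means higher probability of exploiting that task set."""
    scores = -beta * np.asarray(errors, dtype=float)
    p = np.exp(scores - scores.max())
    return p / p.sum()

class TaskSetLibrary:
    """Toy two-level scheme: the second level evaluates task sets by
    a running prediction error and spawns a new set when all existing
    sets exceed a novelty threshold (hypothetical parameter values)."""
    def __init__(self, novelty_threshold=0.5, lr=0.2):
        self.errors = []                      # running error per task set
        self.novelty_threshold = novelty_threshold
        self.lr = lr

    def update(self, set_id, observed_error):
        # exponential moving average of the prediction error
        e = self.errors[set_id]
        self.errors[set_id] = (1 - self.lr) * e + self.lr * observed_error

    def choose(self):
        # every known set is surprising -> create a fresh task set
        if not self.errors or min(self.errors) > self.novelty_threshold:
            self.errors.append(self.novelty_threshold)
            return len(self.errors) - 1
        return int(np.argmax(select_task_set(self.errors)))
```

A selection then alternates `choose` (exploit or spawn) with `update` (evaluate against observed outcomes), which mirrors the evaluation-then-learning split between the two networks.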
It is known that during early infancy, humans experience many physical and cognitive changes that shape their learning and refine their understanding of objects in the world. With the extended arm being one of the very first objects they familiarise themselves with, infants undergo a series of developmental stages that progressively facilitate physical interactions, enrich sensory information, and develop the skills to learn and recognise. Drawing inspiration from infancy, this study models an open-ended learning mechanism for embodied agents that accounts for the cumulative and increasing complexity of physical interactions with the world. The proposed system achieves object perception and recognition as the agent (i.e., a humanoid robot) matures, experiences changes to its visual capabilities, develops sensorimotor control, and interacts with objects within its reach. The reported findings demonstrate the critical role of developing vision in the effectiveness of object learning and recognition, and the importance of reaching and grasping in resolving visually elicited ambiguities. Impediments caused by the interdependency of the parallel components responsible for the agent's physical and cognitive functionalities are exposed, revealing an interesting phase transition in the use of object perceptions for recognition.
The so-called self-other correspondence problem in imitation demands finding the transformation that maps the motor dynamics of a partner to one's own. This requires a general-purpose sensorimotor mechanism that transforms an external fixation-point reference frame (e.g., the partner's shoulder) into one's own body-centered reference frame. We propose that the gain-modulation mechanism observed in parietal neurons may serve such transformations generally, on the one hand by binding sensory signals across modalities with radial basis functions (tensor products), and on the other by permitting the learning of contextual reference frames. In a shoulder-elbow robotic experiment, gain-field (GF) neurons intertwine the visuo-motor variables so that their amplitude depends on all of them. When the body-centered reference frame is modified, the error detected in the visuo-motor mapping can then serve to learn the transformation between the robot's current sensorimotor space and the new one. Such situations occur, for instance, when we turn the head on its axis (visual transformation), when we use a tool (body modification), or when we interact with a partner (embodied simulation). Our results defend the idea that the biologically inspired mechanism of gain modulation found in parietal neurons can serve as a basic structure for achieving nonlinear mappings in spatial tasks as well as in cooperative and social functions.
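The gain-field idea can be sketched in a toy one-dimensional setting (a hedged illustration, not the paper's network): each unit has Gaussian retinal tuning multiplicatively modulated by eye position, forming a tensor-product basis from which a linear readout can recover a body-centered coordinate (here, retinal position plus eye position).

```python
import numpy as np

rng = np.random.default_rng(0)

def gain_field_responses(retinal, eye, centers, sigma=0.3):
    # Gaussian retinal tuning per unit ...
    g_ret = np.exp(-((retinal[:, None] - centers[None, :]) ** 2)
                   / (2 * sigma ** 2))
    # ... multiplicatively modulated by eye position
    # (a constant gain and a linear gain per unit)
    gains = np.stack([np.ones_like(eye), eye], axis=1)
    # tensor product of the two tunings -> gain-field population code
    return np.einsum('nc,ng->ncg', g_ret, gains).reshape(len(retinal), -1)

# training pairs: the body-centered target is retinal + eye position
retinal = rng.uniform(-1, 1, 500)
eye = rng.uniform(-1, 1, 500)
centers = np.linspace(-1, 1, 15)
X = gain_field_responses(retinal, eye, centers)
w, *_ = np.linalg.lstsq(X, retinal + eye, rcond=None)

# the linear readout generalises to unseen postures
r_t, e_t = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
pred = gain_field_responses(r_t, e_t, centers) @ w
max_err = float(np.max(np.abs(pred - (r_t + e_t))))
```

The nonlinearity lives entirely in the multiplicative basis; the readout stays linear, which is what makes re-learning a changed reference frame a matter of adjusting one weight vector.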
This paper proposes a computational model for learning robot control and sequence planning based on the ideomotor principle. This model encodes covariation laws between sensors and motors in a modular fashion and exploits these primitive skills to build complex action sequences, potentially involving tool-use. Implemented for a robotic arm, the model starts with raw unlabelled sensor and motor vectors and autonomously assigns functions to neutral objects in the environment. Our experimental evaluation highlights the emergent properties of such a modular system and we discuss their consequences from ideomotor and sensorimotor-theoretic perspectives.
We explore different strategies to overcome the problem of sensorimotor transformation that babies face during development, especially in the case of tool-use. From a developmental perspective, we investigate one model based on absolute coordinate frames of reference and another based on relative coordinate frames of reference. In a situation of sensorimotor learning and adaptation to tool-use, we perform a computer simulation of a robot with 4 degrees of freedom. We show that the relative-coordinate strategy is the fastest and most robust at re-adapting the neural code.
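Why relative frames ease tool adaptation can be seen in a minimal planar sketch (illustrative only; not the paper's simulation): with joint angles expressed relative to the previous link, attaching a tool amounts to lengthening a single segment, leaving the rest of the code untouched.

```python
import numpy as np

def fk_relative(joint_angles, link_lengths):
    """Forward kinematics of a planar arm where each joint angle is
    expressed relative to the previous link (relative frames)."""
    theta, pos = 0.0, np.zeros(2)
    for a, l in zip(joint_angles, link_lengths):
        theta += a                                  # accumulate relative angles
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
    return pos

# a 4-degree-of-freedom arm (hypothetical angles and lengths)
angles = [0.3, -0.2, 0.5, 0.1]
links = [1.0, 0.8, 0.6, 0.4]
hand = fk_relative(angles, links)

# tool-use: only the final segment's length changes in the relative
# code, whereas an absolute code would need every segment re-expressed
with_tool = fk_relative(angles, links[:3] + [links[3] + 0.3])
```

The end-effector shift is exactly the tool length along the final link direction, so only one parameter of the relative code needs re-learning.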
Examining the different stages of learning through play in humans during early life has long interested scholars. Play evolves from practice play to symbolic play and later to play with rules. During practice play, infants develop knowledge while interacting with surrounding objects, facilitating the creation of new knowledge about objects and object-related behaviors. Such knowledge is used to form schemas in which the manifestation of sensorimotor experiences is captured. Through subsequent play, certain schemas are further combined into chains able to achieve behaviors that require multiple steps. These chains of schemas demonstrate the formation of higher-level actions in a hierarchical structure. In this work we present a schema-based play generator for artificial agents, termed Dev-PSchema. With experiments in a simulated environment and on the iCub robot, we demonstrate the ability of our system to create schemas of sensorimotor experiences from playful interaction with the environment. We show the creation of schema chains consisting of sequences of actions that allow an agent to autonomously perform complex tasks. Beyond learning through playful behavior, we demonstrate the capability of Dev-PSchema to simulate different infants with different preferences toward novel versus familiar objects.
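Schema chaining can be sketched as a small STRIPS-style search (a toy illustration; the schema names and conditions below are invented for the example and are not taken from Dev-PSchema): each schema pairs pre-conditions with the effects of its action, and a chain is a sequence whose cumulative effects reach a goal.

```python
from collections import namedtuple

# a schema pairs the pre-conditions it requires with the
# post-conditions (effects) its action adds to the world state
Schema = namedtuple('Schema', 'name pre post')

def plan_chain(schemas, state, goal, max_depth=5):
    """Depth-first search for a chain of schemas whose pre-conditions
    hold in the evolving state and whose effects satisfy the goal."""
    if goal <= state:                       # goal facts already hold
        return []
    if max_depth == 0:
        return None
    for s in schemas:
        if s.pre <= state and not (s.post <= state):
            rest = plan_chain(schemas, state | s.post, goal, max_depth - 1)
            if rest is not None:
                return [s.name] + rest
    return None

# hypothetical schemas acquired during practice play
schemas = [
    Schema('reach', {'object visible'}, {'near object'}),
    Schema('grasp', {'near object'}, {'holding object'}),
]
chain = plan_chain(schemas, {'object visible'}, {'holding object'})
```

Here neither schema alone reaches the goal, but chaining them does, which is the multi-step behavior the abstract describes.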
In this paper, we propose a bio-inspired, developmental neural model that allows a robot, after learning its own dynamics during a babbling phase, to gain imitative and shape-recognition abilities leading to early attempts at physical and social interaction. We use a motor controller based on oscillators. During the babbling step, the robot learns to associate its motor primitives (oscillators) with the visual optical flow induced by its own arm. It also learns to statically recognize its arm by selecting moving local views (feature points) in the visual field. We demonstrate in real indoor experiments that, using this same model, early physical (reaching objects) and social (immediate imitation) interactions can emerge through visual ambiguities induced by external visual stimuli.
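The babbling association can be caricatured in a few lines (a hedged sketch under invented frequencies and noise levels, not the paper's controller): the robot drives its arm with one oscillator among several candidates, and identifies "its own arm" as the primitive whose trajectory best correlates with the observed optical flow.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)

# two candidate motor primitives (oscillators) at different frequencies
primitives = [np.sin(2 * np.pi * f * t) for f in (0.7, 1.3)]

# during babbling the arm is driven by primitive 0, so the visual
# optical flow follows it, plus noise from the rest of the scene
flow = primitives[0] + 0.1 * rng.standard_normal(t.size)

# associate each primitive with the flow by correlation; the winner
# is the motion the robot attributes to its own arm
corr = [abs(np.corrcoef(p, flow)[0, 1]) for p in primitives]
own_arm = int(np.argmax(corr))
```

This is the contingency-detection half of the model; the static local-view recognition is a separate visual pathway not sketched here.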
Exercising sensorimotor and cognitive functions allows humans, including infants, to interact with the environment and the objects within it. In particular, during everyday activities, infants continuously enrich their repertoire of actions, and through play they experimentally plan such actions in sequences to achieve desired goals. These goals, represented as perceptual target states, are built on previously acquired experiences that infants use to predict the outcomes of their actions. Imitating this, in developmental robotics we seek methods that allow autonomous embodied agents with no prior knowledge to acquire information about the environment. Like infants, robots that actively explore their surroundings and manipulate proximate objects are capable of learning. Their understanding of the environment develops through the discovery of actions and their association with the resulting perceptions in the world. We extend the development of Dev-PSchema, a schema-based, open-ended learning system, and examine the infant-like discovery of new generalised skills while engaging with objects in free play using an iCub robot. Our experiments demonstrate the capability of Dev-PSchema to utilise the newly discovered skills to solve user-defined goals beyond its past experiences. The robot can generate and evaluate sequences of interdependent high-level actions to form potential solutions and ultimately solve complex problems towards tool-use.