The sense of touch is considered an essential feature for robots to improve the quality of their physical and social interactions. For instance, tactile devices have to be fast enough to interact in real time, robust against noise to process rough sensory information, and adaptive enough to represent the structure and topography of the tactile sensor itself, i.e., the shape of the sensor surface and its dynamic resolution. In this paper, we conduct experiments with a self-organizing map (SOM) neural network that adapts to the structure of a tactile sheet and the spatial resolution of the input tactile device; this adaptation is faster and more robust against noise than image reconstruction techniques based on Electrical Impedance Tomography (EIT). Other advantages of this bio-inspired reconstruction algorithm are its simple mathematical formulation and its ability to self-calibrate its topographical organization without any a priori information about the input dynamics. Our results show that the spatial patterns of single and multiple contact points can be acquired and localized with enough speed and precision for pattern recognition tasks during physical contact.
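As a rough illustration of the approach, the sketch below implements a generic SOM whose grid of units adapts to synthetic contact points on a unit-square "skin"; the grid size, annealing schedules, and uniform input distribution are assumptions made for illustration, not the paper's actual setup.

```python
import numpy as np

class SOM:
    """Grid of units whose weights self-organize to mirror the input sheet."""
    def __init__(self, rows=10, cols=10, dim=2, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows * cols, dim))   # unit weights in input space
        # fixed grid coordinates, used for neighborhood distances
        self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    def train(self, x, lr, sigma):
        # best-matching unit: the unit whose weight vector is closest to the input
        bmu = np.argmin(np.linalg.norm(self.w - x, axis=1))
        # Gaussian neighborhood on the grid, centered on the BMU
        d2 = np.sum((self.grid - self.grid[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # pull each unit toward the input, scaled by its neighborhood value
        self.w += lr * h[:, None] * (x - self.w)

# contact points drawn uniformly over a unit-square "skin" (synthetic input)
rng = np.random.default_rng(1)
som = SOM()
for t in range(5000):
    frac = 1.0 - t / 5000                      # anneal learning rate and radius
    som.train(rng.random(2), lr=0.1 * frac + 0.01, sigma=3.0 * frac + 0.5)
```

After training, the weight vectors unfold over the square so that neighboring units respond to neighboring contact locations, which is the topology-learning property the abstract relies on.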
Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be eye-, arm-, or target-centered. In the brain, gain-field (GF) neurons in the parietal cortex are involved in computing the spatial transformations needed to align tactile, visual, and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interactions to bind tactile events on the hand with visual and proprioceptive information. By doing so, they can infer new reference frames that dynamically represent the location of the body parts in visual space (i.e., the body schema) and of nearby targets (i.e., the peripersonal space). Along these lines, we propose a neural model based on GF neurons that integrates tactile events with arm postures and visual locations to construct hand- and target-centered receptive fields in visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behavior of parietal neurons (1) by dynamically encoding the body schema of our robotic arm without any visual tags on it and (2) by estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.
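To make the multiplicative mechanism concrete, here is a minimal gain-field sketch under our own assumptions: each unit multiplies a Gaussian visual receptive field by a Gaussian postural gain, and a population-vector readout over a grid of such units approximates a change of reference frame. This is a textbook basis-function approximation of gain fields, not the paper's exact architecture, and the 1-D geometry, grid size, and tuning widths are illustrative.

```python
import numpy as np

def gain_field_readout(x_retinal, theta_arm, sigma=0.3):
    # grid of units, each tuned to one (visual, postural) preference pair
    v_prefs, p_prefs = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
    # multiplicative interaction: visual receptive field x postural gain
    r = (np.exp(-(x_retinal - v_prefs) ** 2 / (2 * sigma ** 2))
         * np.exp(-(theta_arm - p_prefs) ** 2 / (2 * sigma ** 2)))
    # population readout: each unit votes for (visual pref - postural pref),
    # which approximates the target position in a hand-centered frame
    return np.sum(r * (v_prefs - p_prefs)) / np.sum(r)

# target at 0.4 in the retinal frame, arm at -0.2: hand-centered estimate ~0.6
print(gain_field_readout(0.4, -0.2))
```

The key point carried over from the abstract is that the transformation falls out of the product of two tuning curves rather than from any explicit coordinate arithmetic.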
Touch perception is an important sense to model in humanoid robots so that they can interact physically and socially with humans. We present a neural controller that adapts the compliance of a robot arm in four directions, taking as input the tactile information from an artificial skin and producing as output the estimated torque used as the reference of an admittance control loop. This adaptation is done in a self-organized fashion by a neural system that first learns the topology of the tactile map as it is touched and then associates a torque vector to move the arm in the corresponding direction. The artificial skin is based on a large-area, ungridded piezoresistive tactile device whose electrical properties change in the presence of contact. Our results show the self-calibration of a two-degree-of-freedom robotic arm controlled in the four directions and their derived combination vectors, through soft touch over the whole tactile surface, even when the torque is not detectable (i.e., when the force is applied near the joint). The neural system associates each tactile receptive field with one direction and the correct force. We show that tactile-motor learning yields better interactive behavior than admittance control of the robotic arm alone. Our method could be used in the future for adaptive humanoid interaction with a human partner.
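The control scheme can be sketched as follows, with heavy assumptions: the learned mapping from tactile receptive fields to torque vectors is replaced by a fixed lookup table, and the arm is reduced to a single joint with a first-order admittance model. All names, values, and gains are illustrative, not taken from the paper.

```python
# assumed learned association: tactile receptive field id -> torque reference;
# in the paper this mapping is learned in a self-organized way, here it is fixed
field_to_torque = {0: +0.5, 1: -0.5, 2: +0.2, 3: -0.2}   # N*m, illustrative

def admittance_step(q, dq, tau_ref, dt=0.01, inertia=0.1, damping=0.5):
    # 1-DOF admittance model: inertia * ddq + damping * dq = tau_ref
    ddq = (tau_ref - damping * dq) / inertia
    dq += ddq * dt                 # explicit Euler integration of the joint
    q += dq * dt
    return q, dq

q, dq = 0.0, 0.0
for step in range(100):            # 1 s of simulated contact on field 0
    tau = field_to_torque[0]       # tactile event -> associated torque reference
    q, dq = admittance_step(q, dq, tau)
print(round(q, 3))                 # joint has moved in the associated direction
```

The design point this sketch preserves is that touch, not a force/torque sensor, supplies the reference torque, which is why the scheme still works when the applied force produces no measurable torque at the joint.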
Perceptual illusions across multiple modalities, such as the rubber-hand illusion, show how dynamically the brain adapts its body image and determines what is part of it (the self) and what is not (others). Several studies have shown that redundancy and contingency among sensory signals are essential for perceiving the illusion, and that a lag of 200–300 ms is the critical limit for the brain to represent one's own body. In an experimental setup with an artificial skin, we replicate the visuo-tactile illusion within artificial neural networks. Our model is composed of an associative map and a recurrent map of spiking neurons that learn to predict the contingent activity across the visuo-tactile signals. Depending on the temporal delay introduced between the visuo-tactile signals or the spatial distance between two distinct stimuli, the two maps detect contingency differently. Spiking neurons organized into complex networks, together with synchrony detection at different temporal intervals, can explain multisensory integration with regard to the self-body.
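A toy coincidence detector conveys the contingency idea: tactile spikes charge a leaky trace, visual spikes read it out, and detection collapses once the added lag exceeds the trace's decay window. This is far simpler than the paper's two-map spiking model; the time constant (an effective window of roughly 20 ms, much shorter than the 200–300 ms biological limit), the threshold, and the randomly timed inputs are all assumptions for illustration.

```python
import numpy as np

def contingency(lag_ms, n_events=200, tau=30.0, threshold=0.5, seed=0):
    rng = np.random.default_rng(seed)
    t_tac = np.sort(rng.uniform(0, 60_000, n_events))   # tactile spike times (ms)
    t_vis = t_tac + lag_ms                              # visual = tactile + lag
    events = sorted([(t, 0) for t in t_tac] + [(t, 1) for t in t_vis])
    trace, last, hits = 0.0, -np.inf, 0
    for t, is_visual in events:
        trace *= np.exp(-(t - last) / tau)              # leaky decay of the trace
        last = t
        if is_visual and trace > threshold:
            hits += 1                                   # coincidence detected
        elif not is_visual:
            trace += 1.0                                # tactile spike charges trace
    return hits / n_events

print(contingency(lag_ms=10))     # near-synchronous: detection rate close to 1
print(contingency(lag_ms=300))    # beyond the decay window: rate collapses
```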