2015
DOI: 10.1016/j.neunet.2014.08.009

Exploiting the gain-modulation mechanism in parieto-motor neurons: Application to visuomotor transformations and embodied simulation

Abstract: The so-called self-other correspondence problem in imitation requires finding the transformation that maps the motor dynamics of one partner to our own. This calls for a general-purpose sensorimotor mechanism that transforms an external fixation-point (partner's shoulder) reference frame into one's own body-centered reference frame. We propose that the mechanism of gain-modulation observed in parietal neurons may generally serve these types of transformations by binding the sensory signals across the modalities wi…

Cited by 11 publications (8 citation statements) · References 67 publications
“…Conversely, knowing the variation in the sensory maps, it is possible to estimate which transformation (hidden variable) is the most probable to have generated these outputs. This property of auto-encoders can serve for active inference and action observation, which are also features observed in parietal neurons and in the mirror neuron system [28], [29], [30], for affordance generation [31], [32], and also for sensorimotor adaptation, as during tool use [18]. During grasping, the prediction made by the motor units of the hidden layer of the auto-encoder can serve to "reverse-engineer" the hand preshaping based on visual information; this idea is also found in Rumelhart's and Kawato's forward-inverse models [33], [34], as well as in the "virtual finger hypothesis" by Arbib, who proposed it to explain grasp affordance and the assignment of the orientation and of the power grip of the real fingers during grasping [31], [32], [35].…”
Section: Discussion
confidence: 82%
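To make the cited idea concrete, here is a minimal NumPy sketch (not the authors' implementation) of how multiplicative "mapping" units can infer the hidden transformation relating two sensory maps: each unit scores one candidate transformation by binding the two maps together, and the most active unit identifies the transformation that most probably generated the second map from the first. The bank of circular shifts is a hypothetical stand-in for the transformations a gated auto-encoder would learn.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # size of each 1-D "sensory map"

# Candidate transformations: all circular shifts of an N-dim map.
# (Hypothetical stand-in for the transformations a gated auto-encoder
# would learn in its hidden "mapping" layer.)
shifts = [np.roll(np.eye(N), k, axis=0) for k in range(N)]

def mapping_activations(x, y):
    # Multiplicative binding of the two sensory maps: unit k computes
    # y^T (T_k x), i.e. how well transformation T_k explains y from x.
    return np.array([y @ (T @ x) for T in shifts])

x = rng.random(N)            # first sensory map
true_shift = 5               # hidden variable that generated the second map
y = shifts[true_shift] @ x   # second sensory map

m = mapping_activations(x, y)
print("inferred shift:", int(np.argmax(m)), "| true:", true_shift)
```

The argmax recovers the true shift because the circular autocorrelation of a signal peaks at zero lag; in a trained gated auto-encoder the same role is played by the softly competing activations of the mapping units.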
“…Their model corresponds to multiplicative Radial Basis Functions (RBFs), or sigma-pi networks [13], [14], used to learn sensorimotor transformations. In image processing, these networks are known as gated networks, which have recently been re-investigated in [15], [16] for affine transformations and in developmental robotics [17], [18], [19] for multimodal integration. These multiplicative networks can serve to learn nonlinear transformations, which is a common problem in robotics when computing direct mappings and inverse kinematics.…”
Section: Introduction
confidence: 99%
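A short sketch, under assumed toy settings, of the sigma-pi / multiplicative-RBF scheme the quote referss to: each modality is encoded by a population of Gaussian basis functions, hidden units take pairwise products of the two populations, and a linear readout learns a nonlinear mapping. The target a + b is an illustrative stand-in (e.g., retinal position plus eye position); all names and parameter values are assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian basis (RBF) encoding of a scalar variable over a fixed grid.
centers = np.linspace(-1.0, 1.0, 9)
def rbf(v, sigma=0.25):
    return np.exp(-(v[:, None] - centers[None, :]) ** 2 / (2 * sigma**2))

# Training data: two "modalities" a and b, and the target a + b
# (a toy stand-in for a sensorimotor coordinate transformation).
a = rng.uniform(-1, 1, 2000)
b = rng.uniform(-1, 1, 2000)
target = a + b

# Sigma-pi hidden layer: all pairwise products of the two populations.
A, B = rbf(a), rbf(b)
H = np.einsum("ni,nj->nij", A, B).reshape(len(a), -1)

# Linear readout fit by least squares.
w, *_ = np.linalg.lstsq(H, target, rcond=None)

# Test on unseen input pairs.
at, bt = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
Ht = np.einsum("ni,nj->nij", rbf(at), rbf(bt)).reshape(5, -1)
print(np.round(Ht @ w, 3))   # predictions
print(np.round(at + bt, 3))  # ground truth
```

The key design point is that the nonlinearity lives entirely in the fixed multiplicative feature layer, so learning reduces to a linear regression; the same structure underlies the gated networks cited above, where the product features are learned rather than enumerated.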
“…However, this work does not address the problem of coordinate transformations between different modalities in multisensory integration. Previous works by the authors model the mechanism of gain-modulation found in parietal neurons for audio-visual and visuomotor coordinate transformations [43], [44]. In future work, one aim will be to extend this model to the coordinate transformation of visuo-tactile and proprioceptive reference frames for simulating the RHI with a robotic hand.…”
Section: Discussion
confidence: 98%
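For reference, a minimal sketch of the gain-field response property that such models build on (the parameter values are assumptions, not taken from the cited papers): a unit's Gaussian retinotopic tuning is scaled multiplicatively by eye position, so changing eye position rescales the amplitude of the response without shifting its peak.

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos,
                        pref=0.0, sigma=0.3, gain_slope=0.8):
    # Parietal-style gain field: Gaussian retinotopic tuning whose
    # amplitude (not its peak location) is scaled by eye position.
    tuning = np.exp(-(retinal_pos - pref) ** 2 / (2 * sigma**2))
    gain = 1.0 + gain_slope * eye_pos  # planar eye-position gain
    return gain * tuning

retinal = np.linspace(-1, 1, 5)
for eye in (-0.5, 0.0, 0.5):
    print(f"eye={eye:+.1f}:", np.round(gain_field_response(retinal, eye), 2))
```

A population of such units with varied preferred positions and gain slopes implicitly encodes the head- or body-centered position (retinal plus eye position), which a downstream linear readout can recover; this is the binding across modalities that the abstract above attributes to gain modulation.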
“…By doing so, the neural networks can learn a multimodal body image useful for physical and social interactions [25]. In future work, we plan to extend our results by adding vision and more degrees of freedom, to control the robot both by touch and visually [26], [27], [28].…”
Section: Discussion
confidence: 84%