2018
DOI: 10.1109/tcds.2017.2649225
Robotic Homunculus: Learning of Artificial Skin Representation in a Humanoid Robot Motivated by Primary Somatosensory Cortex

Cited by 35 publications (30 citation statements)
References 49 publications
“…1) Tactile homunculus (superficial schema): One component or "representation" that seems necessary is the "tactile homunculus" or superficial schema. In Hoffmann et al. [31], we have obtained this homuncular representation for one half of the upper body of the iCub humanoid: local stimulations of the skin surface were fed into a self-organizing map (SOM) algorithm that was additionally constrained such that the sequence of body parts on the output sheet mimicked that from the cortex (area 3b); see Fig. 7B.…”
Section: Remapping Decomposed Into Modules
confidence: 99%
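The quoted passage describes feeding local skin stimulations into a self-organizing map so that neighboring output units come to represent neighboring skin locations. A minimal, generic SOM sketch of that idea is shown below; the taxel count, sheet size, and Gaussian "stimulation" inputs are all hypothetical, and the additional cortical-ordering constraint used in the cited paper is not implemented here.

```python
import numpy as np

# Minimal 1-D self-organizing map (SOM) sketch. Hypothetical "skin"
# stimulations (Gaussian bumps over 16 taxels) are mapped onto an
# 8-unit output sheet; the neighborhood update tends to preserve
# topology, loosely in the spirit of the quoted passage.
rng = np.random.default_rng(0)

n_taxels = 16          # hypothetical number of skin sensors
n_units = 8            # size of the 1-D output sheet
weights = rng.random((n_units, n_taxels))

def train_som(weights, samples, epochs=50, lr0=0.5, sigma0=2.0):
    positions = np.arange(weights.shape[0])
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for x in samples:
            # best-matching unit (BMU) for this stimulation
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the BMU on the output sheet
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Local stimulations: Gaussian activation bumps sliding along the skin
samples = np.array([np.exp(-0.5 * ((np.arange(n_taxels) - c) / 1.5) ** 2)
                    for c in range(n_taxels)])
weights = train_som(weights, samples)

# After training, each stimulation site maps to some output unit; with a
# topology-preserving map, nearby sites map to nearby units.
bmus = [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in samples]
print(bmus)
```

This omits the paper's extra constraint on body-part ordering; a standard SOM only self-organizes, it does not guarantee any particular orientation of the learned map.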
“…Arrows illustrate the relationship in orientation between the skin parts and the learned map. From [31].…”
Section: Remapping Decomposed Into Modules
confidence: 99%
“…Lanillos et al used them for constructing a probabilistic body map for self-perception [21]. Besides, [22] employed self-organizing maps for learning inverse-forward kinematics for self-perception whereas [23] and [24] used them for the learning of tactile maps and body image. In comparison, gain-field networks can combine advantageously the topological self-organization property of SOM with autoencoding and the nonlinear probabilistic mapping property of Bayesian networks based on multiplication.…”
Section: Introduction
confidence: 99%
“…The DAC architecture generates aspects of the ecological self through interoceptive processes that maintain a model of the robot's physical parts and the geometry of its current body pose, and exteroceptive processes that monitor the robot's immediate surroundings. For example, using somatotopic maps modelled on human primary sensory cortex, and techniques such as self-touch, Giorgio Metta, Matej Hoffmann and colleagues have developed methods that allow the iCub to learn its own body model [26], and recalibrate its knowledge of its own geometry [54]. Additionally, by combining vision with tactile sensing and with proprioception, iCub is able to develop a sense of peripersonal space that allows it to predict contacts with objects before they happen [53].…”
Section: A Biomimetic Cognitive Architecture For the Robot Self
confidence: 99%