2010
DOI: 10.1007/978-3-642-15193-4_59

Autonomous Development of Social Referencing Skills

Abstract: In this work, we are interested in understanding how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing, as well as facial expression recognition, can emerge from a simple sensori-motor system involving emotional stimuli. Without knowing that the other is an agent, the robot is able to learn some complex tasks if the human partner has some "empathy", or at least "resonates", with the robot head (low leve…

Cited by 12 publications (7 citation statements) | References 17 publications
“…The emotion can be conveyed through a variety of modalities of emotional expression, such as facial expressions, sound, gestures, etc. We chose to explore facial expressions because they are an excellent way to communicate important information in ambiguous situations [3], but also because we can show that learning to recognize facial expressions can be autonomous and very fast [2], which was not evident at first. For this purpose, we were interested in understanding how babies learn to recognize facial expressions without a teaching signal that would allow them to associate, for instance, the sight of a "happy face" with their own internal emotional state of happiness [7].…”
Section: Introduction (mentioning)
confidence: 99%
“…It is challenging to learn to recognize facial expressions without a teaching signal that enables what the robot perceives to be associated with its internal state (e.g., associating the sight of a "happy face" with an internal emotional state of happiness [37]). This issue has been investigated in robotic experiments [13]. In the present study, we show that, as in the facial expression recognition problem, posture recognition can be learned autonomously using a sensory-motor architecture.…”
Section: B. Robot Imitation Skills (mentioning)
confidence: 56%
“…In this case, the human partner communicates with the robot through imitation. In previous studies, we showed that imitation can be used as a communication tool for online facial expression learning [10], [11] and to bootstrap complex capabilities such as social referencing or joint attention [13], [12]. These experiments showed that a robotic head can autonomously develop facial expression recognition without being given a teaching signal.…”
Section: B. Robot Imitation Skills (mentioning)
confidence: 97%
“…He/she provides visual feedback correlated with the robot's internal state. A classical online conditioning mechanism is then sufficient to learn the correct association [21]. Finally, Marin et al. underlined in their psychological studies that motor resonance between (humanoid) robots and humans could optimize the social competence of human-robot interactions [22].…”
Section: Theoretical Context (mentioning)
confidence: 99%
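
The conditioning mechanism mentioned in the last statement can be illustrated with a minimal sketch, assuming a simple normalized delta-rule (LMS) associator between a visual feature vector extracted from the partner's face and the robot's internal emotional state. The class name, feature dimensionality, and learning rate below are illustrative assumptions, not the architecture of the cited work.

import numpy as np

class OnlineConditioner:
    """Normalized LMS (delta-rule) associator linking visual features to an internal state."""

    def __init__(self, n_features, n_states, learning_rate=0.5):
        self.W = np.zeros((n_states, n_features))  # associative weights, initially empty
        self.lr = learning_rate

    def predict(self, visual_features):
        # Internal-state activation recalled from the visual input alone.
        return self.W @ visual_features

    def update(self, visual_features, internal_state):
        # Delta rule: move the recalled state toward the state that is
        # currently active while the partner mirrors the robot.
        error = internal_state - self.predict(visual_features)
        norm = visual_features @ visual_features + 1e-8
        self.W += self.lr * np.outer(error, visual_features) / norm
        return error

# Usage sketch: while the robot expresses "happiness" (one-hot internal state)
# and the human partner mirrors it, the correlated face features become
# associated with that state; later, a similar face alone recalls the state.
rng = np.random.default_rng(0)
conditioner = OnlineConditioner(n_features=64, n_states=4)
face = rng.random(64)                      # stand-in for extracted face features
happy = np.array([1.0, 0.0, 0.0, 0.0])     # internal emotional state while "smiling"
for _ in range(50):
    conditioner.update(face, happy)
print(conditioner.predict(face).round(2))  # activation concentrates on "happy"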