2017
DOI: 10.1007/978-3-319-59740-9_27

Exploring the Physiological Basis of Emotional HRI Using a BCI Interface

Cited by 1 publication (2 citation statements). References 18 publications.
“…In particular, Pepper is a small humanoid robot equipped with, among other sensors, microphones, 3D sensors, touch sensors, a gyroscope, an RGB camera, and a touch screen placed on its chest. Through the ALMood module, Pepper can process perceptions from these sensors (e.g., interlocutors' gaze, voice intonation, or the linguistic semantics of speech) to estimate the instantaneous emotional state of the speaker, of surrounding people, and of the ambiance mood [42,43]. However, Pepper's communication and emotional expression are carried out mainly through speech, a consequence of limitations such as a static face, unrefined gestures, and other nonverbal cues that are not as flexible as human standards [44]; consider, for instance, Figure 4, which displays a sad Pepper.…”
Section: Embodied Service Robots Study Cases
confidence: 99%
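The ALMood pipeline described in the statement above lends itself to a short illustration. The sketch below is a minimal, hypothetical example of polling Pepper's mood service through the qi Python SDK; the service name ALMood comes from the cited text, while the connection details (robot IP, port 9559) and the method names currentPersonState and ambianceState are assumptions drawn from public NAOqi documentation and may vary between NAOqi versions.

```python
# Minimal sketch (not Pepper's production code): query the ALMood service
# for an emotional-state estimate over the qi Python SDK.
import qi

def read_mood(robot_ip="192.168.1.10", port=9559):  # placeholder address
    session = qi.Session()
    session.connect("tcp://{}:{}".format(robot_ip, port))
    mood = session.service("ALMood")  # service name taken from [42,43]

    # Assumed API: estimated state of the current interlocutor (e.g.,
    # valence, attention), fused from gaze, voice intonation, and the
    # linguistic semantics of speech, as the citation describes.
    person_state = mood.currentPersonState()

    # Assumed API: estimated mood of the surrounding ambiance
    # (e.g., "calm" vs. "excited").
    ambiance = mood.ambianceState()
    return person_state, ambiance

if __name__ == "__main__":
    person, ambiance = read_mood()
    print("person state:", person)
    print("ambiance:", ambiance)
```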
“…In summary, in the EAIE cases reviewed above (emoticon-based expression, iPadrone/iPhonoid, and Pepper), emotions are generated through an ad hoc architecture that considers emotions and moods determined by multimodal data. A cartoon of these works is presented in Figure 5, displaying in (a) the work of [33], in (b) the work of [32,35-37], and in (c) the Pepper robot as described in [42-44].…”
Section: Study Cases Through the Emoji Communication Lens
confidence: 99%