2019
DOI: 10.1007/s12369-019-00524-z
Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots

Abstract: Humanoid social robots have an increasingly prominent place in today's world. Their acceptance in social and emotional human-robot interaction (HRI) scenarios depends on their ability to convey well recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human-computer interaction, and HRI, to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emph…

Cited by 125 publications (80 citation statements)
References: 67 publications
“…Similarly, future research should investigate the influence of faces on label ratings, to gain a more complete understanding of the potential bidirectional relationship between language and emotional expressions. We expect that label ratings would be influenced by paired face information; similar bidirectional relationships have previously been shown between face-body and face-voice pairings [74][75][76][77][78][79][80], and we believe this would hold for face-language pairings.…”
Section: PLOS ONE (mentioning)
confidence: 58%
“…Robot behavioral multimodality refers to coordinating and combining different modalities of communication in the robot's (agent's) behavior, which has been a challenging research topic in recent years [38,84]. Regarding the coordination of facial expressions and gestures, among others, Clavel et al [23] discussed the positive effect of facial and bodily expressions on the affective expressivity of a virtual character (and consequently on emotion recognition), and Costa et al [25] showed that gestures can effectively help in recognizing the facial expressions of a robot.…”
Section: Related Work (mentioning)
confidence: 99%
“…Naturally, the emotion perception of humans is not just determined by one type of information; it is triggered by a multitude of factors or signals emitted from others. Many studies have utilized multimodality (i.e., visual, audio, and text) to improve the performance of emotion recognition [ 19 , 21 , 22 , 23 , 51 ]. Zhou et al [ 51 ] and Tripathi et al [ 19 ] modeled the relationships among text, visual, and audio modalities by deep learning methods to improve performance.…”
Section: Related Work (mentioning)
confidence: 99%
“…Naturally, emotion perception by humans is not decided by just one type of information; it is triggered by a multitude of factors or signals emitted from others. By investigating such factors, many researchers have proposed multi-modality approaches to improve the performance of emotion recognition [ 21 , 22 , 23 , 24 , 25 , 26 ].…”
Section: Introduction (mentioning)
confidence: 99%