2006
DOI: 10.1007/11821830_8
Perception of Blended Emotions: From Video Corpus to Expressive Agent

Abstract: Real-life emotions are often blended and involve several simultaneous superposed or masked emotions. This paper reports on a study of the perception of multimodal emotional behaviors in Embodied Conversational Agents. This experimental study aims at evaluating whether people properly detect the signs of emotions in different modalities (speech, facial expressions, gestures) when they appear superposed or masked. We compared the perception of emotional behaviors annotated in a corpus of TV interviews…

Cited by 29 publications (15 citation statements)
References: 27 publications
“…Using an exploratory multimodal-corpus approach, Buisine et al. [5] defined a method for replaying annotated gestures and facial expressions. The authors describe an experimental study exploring how subjects perceive different replays, but do not compare how facial and postural expressions are perceived individually vs. jointly.…”
Section: Nonverbal Expression of Emotion in Virtual Characters (mentioning)
confidence: 99%
“…However, the animation displaying a simple expression was perceived as the closest to the original by 33% of the participants in the audio condition and by 61% of the participants in the no-audio condition (see Table 2). Comparing two animations of complex facial expressions, the one generated with our model of complex facial expressions received 17% (audio) and 9% (no-audio), while the manually defined complex facial expressions fared better in this test (24% and 20%) (see Buisine et al., 2006 for detailed results).…”
Section: Discussion (mentioning)
confidence: 83%
“…synthesising head position and eye gaze is also presented in the paper. Objective evaluation was offered via the "copy-synthesis" method of Buisine et al. [2006], where synthesised movies are subjectively compared with hand-crafted animation. Results indicate that the automatic approach is "satisfactory" and that participants were able to identify at least some of the expressions displayed.…”
Section: Concatenative Approaches (mentioning)
confidence: 99%