2015
DOI: 10.1007/s10579-015-9300-0

The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations

Cited by 54 publications (33 citation statements)
References 43 publications
“…More specifically, several problems that manifest in absolute ratings, such as sensitivity to range, anchor point, and sequential effects among different annotators (Parthasarathy et al., 2016), would cause noise in the first-order delta. Inter-rater agreement refers to agreement on categorical or numeric annotations among annotators, and is normally measured by Cronbach's α (McKeown et al., 2012; Ringeval et al., 2013; Metallinou et al., 2015).…”
Section: Delta Emotion Ground Truth
confidence: 99%
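The statement above leans on two computations: Cronbach's α as an inter-rater agreement measure over annotators, and the first-order delta of a continuous rating trace. A minimal NumPy sketch of both follows; the toy rating matrix and function names are illustrative, not code from any of the cited papers.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_frames, n_annotators) rating matrix.

    alpha = k/(k-1) * (1 - sum of per-annotator variances / variance of summed trace)
    """
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each annotator's trace
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed trace
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def first_order_delta(trace: np.ndarray) -> np.ndarray:
    """First-order delta of a continuous emotion trace (frame-to-frame change)."""
    return np.diff(trace)

# Toy example: 3 annotators rating arousal over 100 frames.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 4, 100))
ratings = np.stack([base + 0.1 * rng.standard_normal(100) for _ in range(3)], axis=1)
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")
print("Delta of mean trace:", first_order_delta(ratings.mean(axis=1))[:5])
```

Note how annotator-specific range and anchor-point offsets inflate the per-annotator variances without adding shared signal, which is exactly the kind of noise the quoted passage says propagates into the first-order delta.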
“…With growing awareness of this, an increasing number of groups have considered the time course of emotions by employing continuously annotated corpora (Gunes and Schuller, 2013). Examples of such corpora are SEMAINE, CreativeIT (Metallinou et al., 2015), RECOLA (Ringeval et al., 2013), and the Belfast Naturalistic Database (Sneddon et al., 2012), where emotional ratings (e.g., arousal and valence) are evaluated continuously, based on audio and video signals, using real-time annotation tools such as Feeltrace (Cowie and Douglas-Cowie, 2000), Gtrace (Cowie et al., 2013), and ANNEMO (Ringeval et al., 2013). Building on continuous annotation, a number of systems have been developed to predict the ratings at a fine temporal granularity, for example in the Audio-Visual Emotion Challenge (AVEC) (Schuller et al., 2011; Ringeval et al., 2015b), but overall performance is not always satisfactory.…”
Section: Introduction
confidence: 99%
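Systems of the kind described above are typically scored per frame against the continuous gold trace; one metric commonly used in this line of work (e.g., in AVEC evaluations) is Lin's concordance correlation coefficient (CCC), which penalizes both low correlation and systematic bias. A minimal sketch, assuming predicted and reference arousal traces of equal length:

```python
import numpy as np

def concordance_cc(pred: np.ndarray, gold: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two continuous traces.

    CCC = 2*cov(pred, gold) / (var(pred) + var(gold) + (mean difference)^2)
    """
    pred_mean, gold_mean = pred.mean(), gold.mean()
    cov = ((pred - pred_mean) * (gold - gold_mean)).mean()
    return 2 * cov / (pred.var() + gold.var() + (pred_mean - gold_mean) ** 2)

# Toy check: a scaled, shifted copy of the gold trace loses CCC
# despite having perfect Pearson correlation.
gold = np.sin(np.linspace(0, 6, 200))
print(concordance_cc(gold, gold))              # 1.0
print(concordance_cc(0.5 * gold + 0.2, gold))  # < 1.0
```

The mean-difference term in the denominator is the design point: a predictor that tracks the annotation's shape but misses its level or scale is still penalized, which plain correlation would not do.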
“…LGBP-TOP has been used as a baseline feature in automatic affect recognition challenges [3][11]. Geometric video features are derived from landmarks on the face [11], the shoulders [12], or the whole body [13].…”
Section: Related Work
confidence: 99%
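In contrast to appearance descriptors such as LGBP-TOP, geometric features are computed directly from landmark coordinates. A minimal sketch of one common choice, pairwise landmark distances; the five-point layout is hypothetical and this is not the specific feature set of [11], [12], or [13]:

```python
import numpy as np

def pairwise_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Geometric features from one frame of (n_points, 2) landmark coordinates.

    Returns the upper-triangular pairwise Euclidean distances, a simple
    translation- and rotation-invariant descriptor for face or body landmarks.
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]  # (n, n, 2)
    dists = np.linalg.norm(diffs, axis=-1)                 # (n, n)
    iu = np.triu_indices(len(landmarks), k=1)              # upper triangle, no diagonal
    return dists[iu]

# Hypothetical example: 5 facial landmarks -> 10 distance features per frame.
frame = np.random.default_rng(1).random((5, 2))
print(pairwise_distance_features(frame).shape)  # (10,)
```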
“…An interactive emotional dyadic motion capture database, the USC IEMOCAP database, is presented in [19]; it is a multimodal, multi-speaker database of improvised and scripted dyadic interactions. The USC CreativeIT database contains full-body motion capture information in the context of expressive theatrical improvisations [9,20]. The database is annotated with the valence, activation, and dominance attributes, as well as theater performance ratings such as interest and naturalness.…”
Section: Literature Review
confidence: 99%
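Continuous attribute annotations like the valence, activation, and dominance traces described above usually come as per-annotator curves of slightly different lengths or sampling rates, so a common preprocessing step is to resample them to a shared time base and fuse them into one reference curve. A minimal sketch using simple mean fusion; the trace lengths are invented and this does not reproduce the CreativeIT annotation pipeline:

```python
import numpy as np

def fuse_annotator_traces(traces: list[np.ndarray], n_frames: int) -> np.ndarray:
    """Resample each annotator's continuous trace onto a common frame grid
    and average them into a single reference curve (simple mean fusion)."""
    grid = np.linspace(0.0, 1.0, n_frames)
    resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(t)), t) for t in traces]
    return np.mean(resampled, axis=0)

# Hypothetical traces for one attribute (e.g. activation), at unequal lengths.
rng = np.random.default_rng(2)
traces = [np.cumsum(rng.standard_normal(n)) * 0.05 for n in (240, 250, 255)]
gold = fuse_annotator_traces(traces, n_frames=250)
print(gold.shape)  # (250,)
```

Mean fusion is only one of several schemes in the literature; alternatives weight annotators by agreement or compensate for per-annotator reaction-time lags before averaging.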
“…Through social signals of agreement and disagreement in a communicative interaction, participants can share convergent or divergent opinions, proposals, goals, attitudes, and feelings. In the recent literature, common types of such social interaction are group meeting scenarios [2][3][4][5], political debates [6][7][8], theatrical improvisations [9], and broadcast conversations [10,11].…”
Section: Introduction
confidence: 99%