Interspeech 2013
DOI: 10.21437/interspeech.2013-56
The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism

Abstract: The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for Social Signals such as laughter in speech. It further introduces conflict in group discussions as a new task and deals with autism and its manifestations in speech. Finally, emotion is revisited as a task, albeit with a broader range of overall twelve enacted emotional states. In this paper, we describe these four Sub-Challenges, their conditions, baselines, and a new feature set by the openSMILE toolk…

Cited by 452 publications (138 citation statements); references 26 publications.
“…Other features include the harmonics-to-noise ratio, which was found to be unrelated to arousal [44], and jitter, which showed a positive correlation with depression [45]. Arousal has been the easiest dimension to detect from voice acoustics [46]. Discrete emotion recognition based on these features in deep neural networks has also been successful [47].…”
Section: Introduction
confidence: 99%
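The jitter measure mentioned in the statement above can be illustrated with a minimal local-jitter computation over consecutive pitch periods. This is a sketch of the standard definition (mean absolute difference between adjacent periods, normalized by the mean period), not the exact extractor used in the cited studies:

```python
def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, divided by the mean period (dimensionless)."""
    if len(periods) < 2:
        raise ValueError("need at least two pitch periods")
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    mean_abs_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods) / len(periods)
    return mean_abs_diff / mean_period

# Perfectly regular voicing yields zero jitter;
# period-to-period variation raises it.
print(local_jitter([0.005, 0.005, 0.005]))  # 0.0
print(local_jitter([0.004, 0.005, 0.006]))  # 0.2
```

Values are typically reported as a percentage; pathological or highly aroused voice tends to show larger period-to-period irregularity.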
“…Other features include a harmonics to noise ratio, which was found unrelated to arousal [ 44 ], and jitter, which showed a positive correlation with depression [ 45 ]. Arousal has been easiest to detect based on voice acoustics [ 46 ]. Discrete emotion recognition based on these features in deep neural networks has also been successful [ 47 ].…”
Section: Introductionmentioning
confidence: 99%
“…The ComParE acoustic feature set is a well-established set which has been shown to give consistent insights in related domains of speech analysis (Stappen et al., 2019), including states of stress (Baird et al., 2019; Stappen et al., 2021) and anxiety (Baird et al., 2020). The ComParE feature set has also been used as the baseline feature set for the INTERSPEECH ComParE challenges since 2013 (Schuller et al., 2013), and was further extended in 2016 (Schuller et al., 2016). As with the 2021 ComParE challenge (Schuller et al., 2021), we extract the features from the entire audio samples, resulting in feature sets of 6,373 static features, derived from the calculation of static functionals over low-level descriptor (LLD) contours (Eyben et al., 2013; Schuller et al., 2013).…”
Section: Features
confidence: 99%
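The pipeline described above — static features obtained by applying functionals to low-level descriptor (LLD) contours — can be sketched in a few lines. The single LLD (frame-wise RMS energy) and the five functionals here are illustrative stand-ins; the actual ComParE set combines many LLDs and functionals to reach 6,373 features:

```python
import math

def frame_energy_lld(samples, frame_len=400, hop=160):
    """LLD contour: per-frame RMS energy over a mono sample list
    (e.g. 25 ms frames with a 10 ms hop at 16 kHz)."""
    contour = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        contour.append(math.sqrt(sum(x * x for x in frame) / frame_len))
    return contour

def functionals(contour):
    """Static functionals summarizing one LLD contour
    into a fixed-size feature vector."""
    n = len(contour)
    mean = sum(contour) / n
    var = sum((x - mean) ** 2 for x in contour) / n
    return {
        "mean": mean,
        "stddev": math.sqrt(var),
        "min": min(contour),
        "max": max(contour),
        "range": max(contour) - min(contour),
    }

# A constant signal gives a flat energy contour:
# mean equals the RMS level, stddev and range are zero.
feats = functionals(frame_energy_lld([0.5] * 1600))
```

The key property, as in the quoted passage, is that the output dimensionality is fixed by the choice of LLDs and functionals, regardless of the audio duration.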
“…Acoustic features of the Emotion data set are extracted using openSMILE with the Computational Paralinguistics Challenge (ComParE 2013) feature set (Schuller et al., 2013). Sentence embedding features are extracted with a Chinese RoBERTa pretrained model.…”
Section: Real-world Data Sets
confidence: 99%