2011
DOI: 10.1007/978-3-642-24571-8_48

Investigating the Use of Formant Based Features for Detection of Affective Dimensions in Speech
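
The paper investigates formant-based acoustic features for predicting affective dimensions in speech. As a rough illustration of the kind of front-end such work builds on, here is a minimal sketch of frame-level formant extraction; the praat-parselmouth package and all parameter values are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: frame-level formant (F1-F3) extraction.
# Assumes the praat-parselmouth package (pip install praat-parselmouth);
# the paper's actual feature-extraction toolchain is not stated here.
import numpy as np
import parselmouth

def formant_features(wav_path, time_step=0.01, max_formant=5500.0):
    """Return an (n_frames, 3) array of F1-F3 estimates in Hz."""
    snd = parselmouth.Sound(wav_path)
    formants = snd.to_formant_burg(time_step=time_step,
                                   maximum_formant=max_formant)
    times = np.arange(0.0, snd.duration, time_step)
    # get_value_at_time returns NaN where formant tracking failed
    return np.array([[formants.get_value_at_time(n, t) for n in (1, 2, 3)]
                     for t in times])
```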

Citation types: 0 supporting, 9 mentioning, 0 contrasting

Cited by 19 publications (9 citation statements)
References 6 publications
“…Since the median duration of the turns in the SEMAINE corpus is 2.76 secs, the delay is significant and the resulting labels do not represent the actual expressive behaviors. We hypothesize that this is one of the reasons for the low emotion recognition performance reported in classification studies on this database [33], [34], [35].…”
Section: Related Work
Confidence: 92%
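
The annotation-delay problem this quote raises is commonly handled by shifting the continuous label track earlier in time before training. Below is a minimal sketch of that compensation step; the function, its arguments, and the idea that a single fixed delay suffices are assumptions for illustration, not the cited authors' method.

```python
# Minimal sketch: compensate annotator reaction delay by shifting a
# continuous label track (e.g., valence) earlier in time. A fixed,
# known delay is assumed here; estimating it is a problem in itself.
import numpy as np

def shift_labels(labels, delay_s, frame_rate_hz):
    """Align labels so labels[t] describes the behavior at time t."""
    shift = int(round(delay_s * frame_rate_hz))
    if shift <= 0:
        return labels.copy()
    shifted = np.empty_like(labels)
    shifted[:-shift] = labels[shift:]   # label at t+delay now describes t
    shifted[-shift:] = labels[-1]       # pad the tail with the last value
    return shifted
```
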
“…Here we compare our audio-only approach against the baselines as well as three state-of-the-art approaches [3,5,2]. The results are shown in Table 1.…”
Section: Audio Sub-challenge
Confidence: 97%
“…There has been extensive work on human emotion recognition in recent years [2,3,4,5]. Recognizing that human emotion varies dynamically, several works have used techniques such as HMMs [3] and CRFs (and their variations) [2] for analyzing human emotions.…”
Section: Introduction
Confidence: 99%
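
As a concrete instance of the HMM-based modeling this quote refers to, here is a minimal per-class Gaussian HMM classifier over frame-level acoustic features; hmmlearn, the state count, and the one-model-per-class setup are assumptions for illustration, not the cited systems' exact configuration.

```python
# Minimal sketch: one Gaussian HMM per emotion class, classification by
# maximum log-likelihood. hmmlearn and all hyperparameters are assumed.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_class_hmms(feats_by_class, n_states=3):
    """feats_by_class: {label: [utterance arrays of shape (T_i, D)]}."""
    models = {}
    for label, seqs in feats_by_class.items():
        hmm = GaussianHMM(n_components=n_states,
                          covariance_type="diag", n_iter=50)
        # hmmlearn takes one stacked matrix plus per-sequence lengths
        hmm.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
        models[label] = hmm
    return models

def classify(models, feats):
    """Return the class whose HMM scores the utterance highest."""
    return max(models, key=lambda lbl: models[lbl].score(feats))
```
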
“…For AVEC 2011: UCL (Meng and Bianchi-Berthouze 2011), Uni-ULM (Glodek et al 2011), GaTechKim (Kim et al 2011), LSU (Calix et al 2011), Waterloo (Sayedelahl et al 2011), NLPR (Pan et al 2011), USC (Ramirez et al 2011), GaTechSun (Sun and Moore 2011), I2R-SCUT (Cen et al 2011), UCR (Cruz et al 2011) and UMontreal (Dahmane and Meunier 2011a, b). For AVEC 2012: UPMC-UAG (Nicolle et al 2012), Supelec-Dynamixyz-MinesTelecom (Soladie et al 2012), UPenn (Savran et al 2012a), USC (Ozkan et al 2012), Delft (van der Maaten 2012), Uni-ULM (Glodek et al 2012), Waterloo2 (Fewzee and Karray 2012).…”
Section: Audio/Visual Emotion Challenge 2011/2012
Confidence: 99%