2009 IEEE International Conference on Acoustics, Speech and Signal Processing 2009
DOI: 10.1109/icassp.2009.4960695
Detecting real life anger

Cited by 44 publications (28 citation statements)
References 5 publications
“…A Kappa value of 0 means no agreement, values between 0.4 and 0.7 are usually regarded as fair agreement and values above denote excellent agreement. Our labelers reached Kappa values between 0.79 for an early data collection [4] and 0.55 for the data set described here [6]. This indicates (not surprisingly) that agreement depends strongly on the data as well as the labelers.…”
Section: Data Labeling (supporting)
confidence: 60%
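The quoted passage interprets Kappa values for inter-labeler agreement. A minimal sketch of Cohen's kappa for two labelers (the label data below is invented purely for illustration; the cited papers used real annotations):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Proportion of turns where both labelers chose the same label.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each labeler's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - chance) / (1 - chance)

a = ["angry", "angry", "neutral", "angry", "neutral", "neutral"]
b = ["angry", "neutral", "neutral", "angry", "neutral", "angry"]
print(round(cohens_kappa(a, b), 2))  # → 0.33
```

A value near 0.33 would fall below the 0.4–0.7 "fair agreement" band mentioned above; the generalization to more than two labelers (Davies and Fleiss) averages pairwise chance-corrected agreement rather than using a single pair.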
“…The progress in this work was reported in [7,4,17,5,6]. During this period, we experimented with different acoustic feature sets and different classifier algorithms.…”
Section: Believable Agents / Artificial Humans (mentioning)
confidence: 99%
“…Previous studies on both corpora yielded a much lower performance compared to our new findings. The former system described in (Schmitt et al, 2009) with the English database reached 72.6% f1 while the system described in (Burkhardt et al, 2009) developed for the German database reached 70% f1. The performance gain on the training set of respectively 5.6% and 4.7% f1 in our study can be attributed to the employment of the enhanced feature sets and the feature selection by IGR filtering.…”
Section: 3 (mentioning)
confidence: 99%
“…For each turn, 3 labelers assigned one of the following labels: not angry, not sure, slightly angry, clear anger, clear rage, or marked the turn as not applicable when encountering garbage. The labels were mapped onto two cover classes by clustering according to a threshold over the average of all voters' labels, as described in (Burkhardt et al, 2009). Following Davies' extension of Cohen's Kappa (Davies and Fleiss, 1982) for multiple labelers, we obtain a value of κ = 0.52, which corresponds to moderate inter-labeler agreement (Steidl et al, 2005).…”
Section: Selected Corpora (mentioning)
confidence: 99%
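The passage above describes collapsing the five-point anger scale into two cover classes via a threshold over the average of the voters' labels. A hedged sketch of that mapping, where the numeric encoding of the labels and the threshold value are assumptions chosen for illustration, not values reported in the cited papers:

```python
# Assumed ordinal encoding of the five labels (illustrative only).
SCORES = {
    "not angry": 0,
    "not sure": 1,
    "slightly angry": 2,
    "clear anger": 3,
    "clear rage": 4,
}

def cover_class(turn_labels, threshold=1.5):
    """Map one turn's labels (one per labeler) to a binary cover class.

    The mean of the labelers' scores is compared against a threshold;
    turns above it are treated as 'angry', the rest as 'not angry'.
    """
    mean = sum(SCORES[label] for label in turn_labels) / len(turn_labels)
    return "angry" if mean > threshold else "not angry"

# Three labelers disagree on severity, but the average crosses the threshold.
print(cover_class(["slightly angry", "clear anger", "not sure"]))  # → angry
print(cover_class(["not angry", "not sure", "not angry"]))         # → not angry
```

Averaging before thresholding lets a single extreme vote (e.g. one "clear rage" among two "not angry") tip a turn into the anger class only when it outweighs the other labelers.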
“…Anger was detected in recordings from a German voice portal in [6]. Support vector machines (SVMs) and Gaussian mixture model (GMM)-based classifiers were applied to pitch, energy, duration, and spectral-related features.…”
Section: Introduction (mentioning)
confidence: 99%