2009 Fourth International Conference on Digital Telecommunications
DOI: 10.1109/icdt.2009.30
Statistical Evaluation of Speech Features for Emotion Recognition

Cited by 31 publications
(13 citation statements)
References 15 publications
“…However, emotion recognition is a challenging task, even for humans [13] [33], as is verified by the related literature [8] [20] [25]. This can be attributed to a multitude of reasons.…”
Section: Introduction (mentioning)
confidence: 64%
“…According to [39], for the feature of pitch range, anger, happiness, and fear have a much wider pitch range, sadness a slightly narrower one, whereas disgust a slightly wider one. Both pitch and energy for happiness and anger are usually higher than for sadness [25]. Fear and anger present a high energy level, whereas sadness demonstrates a low one [50].…”
Section: Feature Extraction (mentioning)
confidence: 99%
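The pitch-range and energy cues described in the quote above can be sketched with plain numpy. The autocorrelation pitch estimator and the synthetic 200 Hz test tone below are illustrative assumptions for a single voiced frame, not the method of the cited papers:

```python
import numpy as np

def estimate_pitch_hz(frame, sr, fmin=50.0, fmax=500.0):
    """Rough pitch estimate via autocorrelation (illustrative sketch)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)            # shortest period considered
    lag_max = int(sr / fmin)            # longest period considered
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

def rms_energy(frame):
    """Frame energy as root-mean-square amplitude."""
    return float(np.sqrt(np.mean(frame ** 2)))

# Synthetic 200 Hz tone standing in for one voiced speech frame
sr = 16000
t = np.arange(sr // 10) / sr            # 100 ms frame
frame = 0.5 * np.sin(2 * np.pi * 200 * t)
print(estimate_pitch_hz(frame, sr), rms_energy(frame))  # roughly 200 Hz
```

Per-utterance pitch range (max minus min over voiced frames) and mean energy, computed this way frame by frame, are the kind of prosodic statistics these comparisons between anger, happiness, fear, and sadness rely on.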
“…The authors calculate features and then apply an algorithm to select the most relevant ones, showing that prosodic features are more helpful for emotion detection than spectral features. Another framework extracted 133 speech features, aiming to identify a feature set that would be appropriate to discriminate between seven emotions based on speech processing [7]. They used a neural network classifier with 35 input vectors and tested their model on the Berlin dataset, which includes speaker-dependent and speaker-independent instances.…”
Section: Results (mentioning)
confidence: 99%
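A relevance-based selection step of the kind described (reducing 133 features to the most discriminative ones) can be sketched with a per-feature Fisher discriminant ratio. The `fisher_ratio` helper and the toy two-class data are assumptions for illustration, not the cited framework's actual selector:

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature Fisher discriminant ratio for two classes (toy sketch)."""
    a, b = X[y == 0], X[y == 1]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(0)
n = 200
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 3))
X[:, 0] += 3 * y                        # feature 0 is informative; 1-2 are noise

scores = fisher_ratio(X, y)
best = int(np.argmax(scores))
print(best)  # feature 0 ranks highest
```

Ranking features by such a score and keeping the top k mirrors the relevance-selection idea; a multi-class version would compare between-class and within-class variance across all seven emotion categories.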
“…Numerous acoustic features, such as prosodic features [4][5][6], spectral features [7][8][9], and voice quality [10,11], are applied to emotion recognition. Some emotions present similarities; thus, using only one type of acoustic feature to recognize emotions is inadequate.…”
Section: Introduction (mentioning)
confidence: 99%