2011
DOI: 10.1016/j.patcog.2010.09.020
Survey on speech emotion recognition: Features, classification schemes, and databases

Cited by 1,719 publications (859 citation statements)
References 90 publications
“…Even humans may have difficulty describing how they feel, distinguishing between emotions or remembering how they felt only minutes earlier. A survey verifies the differences among human emotional perception [3], whereas a more detailed study is available in [15]. For the latter case, 20 subjects describe their perception of 6 emotions, namely happiness, hot anger, neutral, interest, panic, and sadness from the Emotional Prosody Speech and Transcripts (EPST) corpus.…”
Section: Related Work
confidence: 99%
“…The importance of context information is emphasized in [12]. An additional survey states that, besides the features, the applied classifier also plays a significant role in emotion recognition performance [3]. For example, Gaussian mixture models (GMMs) cannot model the temporal structure of the data, whereas the classification accuracy of artificial neural networks seems fairly low compared to other classifiers.…”
Section: Related Work
confidence: 99%
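The temporal limitation mentioned in the snippet above can be sketched in code. This is a minimal illustration, not the cited survey's method: one GMM per emotion class, fitted on frame-level features, with classification by maximum average log-likelihood. The class names and synthetic feature values are assumptions for the demo; note that shuffling an utterance's frames leaves the score unchanged, which is exactly the missing temporal modelling.

```python
# Hypothetical sketch: per-class GMM frame classifier. Each frame is scored
# independently, so the model is blind to the order of frames in time.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic, well-separated "emotion" classes (200 frames x 4 features each).
train = {
    "neutral": rng.normal(0.0, 1.0, size=(200, 4)),
    "anger":   rng.normal(6.0, 1.0, size=(200, 4)),
}

# Fit one mixture model per class.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X)
          for c, X in train.items()}

def classify(utterance_frames):
    # score() returns the average per-frame log-likelihood; the frame order
    # is irrelevant, so any permutation of the utterance scores identically.
    scores = {c: m.score(utterance_frames) for c, m in models.items()}
    return max(scores, key=scores.get)

test_utt = rng.normal(6.0, 1.0, size=(50, 4))
print(classify(test_utt))  # prints "anger"
```

Because the likelihood is a sum over frames, `classify(test_utt)` and `classify(test_utt[::-1])` are guaranteed to agree, whereas a sequence model (e.g. an HMM) could distinguish the two orderings.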
“…So, to make these two binary images mutually exclusive, we define the following (6). Here is a weak edge response and is a strong edge response. So if we combine the weak edge response with the strong edge response, then we will get valid edge points, so the final image result is…”
Section: ASLC Edge Detection, Marr-Hildreth Hysteresis Analysis, Edge Fe…
confidence: 99%
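The combination step described in the snippet above is hysteresis linking: weak edge pixels are kept only where they connect to strong ones. The following is a generic sketch of that idea, not the cited paper's exact formulation (its equation (6) and symbols are not recoverable here); the two input masks stand in for the paper's thresholded weak and strong edge responses.

```python
# Hypothetical sketch of hysteresis edge combination: retain weak-edge
# pixels whose connected component contains at least one strong-edge pixel.
import numpy as np
from scipy import ndimage

def hysteresis_combine(strong, weak):
    """Valid edges = connected components of (strong | weak) touching a strong pixel."""
    combined = strong | weak                 # all candidate edge pixels
    labels, _ = ndimage.label(combined)      # 4-connected components (default)
    keep = np.unique(labels[strong])         # component labels containing strong pixels
    keep = keep[keep != 0]                   # 0 is background
    return np.isin(labels, keep)

strong = np.zeros((5, 7), dtype=bool)
strong[2, 1] = True                 # one strong seed pixel
weak = np.zeros_like(strong)
weak[2, 2:5] = True                 # weak chain attached to the seed -> kept
weak[0, 6] = True                   # isolated weak pixel -> discarded
edges = hysteresis_combine(strong, weak)
print(edges[2, 4], edges[0, 6])     # prints "True False"
```

An 8-connected variant would pass `structure=np.ones((3, 3))` to `ndimage.label`; the choice of connectivity decides which diagonal weak chains survive.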