Proceedings of the 10th International Conference on Agents and Artificial Intelligence 2018
DOI: 10.5220/0006611601750182
Speech Emotion Recognition: Methods and Cases Study

Cited by 40 publications (30 citation statements). References 10 publications.
“…As shown in Table 1, SVM classifier yields better results above 81%, with feature combination of MFCC and MS for Berlin database. Our results have improved compared to previous results in [21] because we changed the SVM parameters for each type of features to develop a good model.…”
Section: Results and Analysis
confidence: 70%
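The per-feature-type SVM tuning described in the statement above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the MFCC and modulation-spectral (MS) arrays are synthetic stand-ins for features that would really be extracted from Emo-DB audio, and the grid values and dimensions are assumptions.

```python
# Hypothetical sketch: grid-searching SVM parameters on fused MFCC + MS
# features. Shapes, grid values, and the random "features" are illustrative.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
mfcc = rng.normal(size=(n, 39))   # stand-in for per-utterance MFCC statistics
ms = rng.normal(size=(n, 8))      # stand-in for modulation-spectral features
y = rng.integers(0, 7, size=n)    # seven emotion classes

def tune_svm(features, labels):
    """Grid-search SVM hyperparameters for one feature set."""
    grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}
    search = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel="rbf")), grid, cv=3)
    search.fit(features, labels)
    return search

combined = np.hstack([mfcc, ms])  # feature-level fusion of MFCC and MS
model = tune_svm(combined, y)
print(model.best_params_)
```

Running the same `tune_svm` separately on `mfcc`, `ms`, and `combined` mirrors the idea of choosing SVM parameters per feature type rather than using one fixed setting.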
“…In previous work [21], we present a system for the recognition of «seven acted emotional states (anger, disgust, fear, joy, sadness, and surprise)». To do that, we extracted the MFCC and MS features and used them to train three different machine learning paradigms (MLR, SVM, and RNN).…”
Section: Introduction
confidence: 99%
“…They extracted 16 low-level features and tested their methods over several machine learning algorithms, including artificial neural network (ANN), classification and regression tree (CART), support vector machine (SVM) and K-nearest neighbor (KNN) on berlin emotion speech database (Emo-DB). Kerkeni et al [11] proposed a speech emotion recognition method that extracted MFCC with modulation spectral (MS) features and used recurrent neural network (RNN) learning algorithm to classify seven emotions from Emo-DB and Spanish database. The authors in [15] proposed a speech emotion recognition model where glottis was used for compensation of glottal features.…”
Section: Related Studies
confidence: 99%
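The RNN-based classification the statement above attributes to Kerkeni et al. can be illustrated with a bare-bones forward pass over a sequence of MFCC-like frames. All dimensions and weights here are hypothetical; a trained model would learn the weight matrices rather than sample them at random.

```python
# Minimal numpy sketch of a simple (Elman-style) RNN forward pass classifying
# one utterance into seven emotions. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
T, d, h, k = 50, 39, 32, 7        # frames, feature dim, hidden units, classes
x = rng.normal(size=(T, d))       # one utterance as a sequence of MFCC frames

Wx = rng.normal(scale=0.1, size=(d, h))   # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(h, h))   # hidden-to-hidden (recurrent) weights
Wo = rng.normal(scale=0.1, size=(h, k))   # hidden-to-output weights

s = np.zeros(h)
for frame in x:                   # recurrence over time
    s = np.tanh(frame @ Wx + s @ Wh)

logits = s @ Wo                   # final hidden state summarizes the utterance
probs = np.exp(logits - logits.max())
probs /= probs.sum()              # softmax over the seven emotions
print(probs.argmax())
```

The final hidden state acts as a fixed-length summary of the variable-length frame sequence, which is what makes recurrent models a natural fit for utterance-level emotion labels.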
“…Practical applications of emotion recognition systems can be found in many domains such as audio/video surveillance [6], web-based learning, commercial applications [7], clinical studies, entertainment [8], banking [9], call centers [10], computer games [11] and psychiatric diagnosis [12]. In addition, other real applications include remote tracking of persons in a distressed phase, communication between human and robots, mining sentiments of sport fans and customer care services [13], where emotion is perpetually expressed.…”
Section: Introduction
confidence: 99%