2011
DOI: 10.1007/978-3-642-24571-8_47

Multiple Classifier Systems for the Classification of Audio-Visual Emotional States

Abstract: Research activities in the field of human-computer interaction increasingly address the aspect of integrating some type of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures, and therefore the classification of human emotions should be considered as a multimodal pattern recognition problem. The aim of our paper is to investigate multiple classifier systems utilizing audio and visual features to classify human e…


Cited by 100 publications (58 citation statements)
References 30 publications
“…Classification - For kNN classification we used L1 distance and reported results for three different k values (5, 7, 9). For SVM, we used linear and radial basis function (RBF) kernels, and trained one-vs-one classifiers (with probabilistic output) for each expression.…”
Section: Discrete and Categorical Facial Expression Recognition
confidence: 99%
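The cited study's exact pipeline is not given here, but the setup it describes (kNN with L1 distance for k in {5, 7, 9}, plus one-vs-one SVMs with linear and RBF kernels and probabilistic output) can be sketched with scikit-learn on synthetic stand-in data; the dataset and all hyperparameters below are illustrative assumptions, not values from the paper:

```python
# Sketch of the classification setup quoted above, assuming scikit-learn.
# Synthetic data stands in for the audio-visual expression features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# kNN with L1 (Manhattan) distance, for the three k values in the quote
for k in (5, 7, 9):
    knn = KNeighborsClassifier(n_neighbors=k, metric="manhattan")
    acc = knn.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"kNN k={k}: accuracy {acc:.3f}")

# One-vs-one SVMs with probabilistic output (Platt scaling via probability=True)
for kernel in ("linear", "rbf"):
    svm = SVC(kernel=kernel, probability=True,
              decision_function_shape="ovo", random_state=0)
    svm.fit(X_tr, y_tr)
    proba = svm.predict_proba(X_te)  # per-class probability estimates
    print(f"SVM {kernel}: accuracy {svm.score(X_te, y_te):.3f}")
```

With `probability=True`, scikit-learn calibrates the SVM decision values with an internal cross-validation, which is one common way to obtain the probabilistic outputs the quote mentions.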
“…For AVEC 2011: UCL (Meng and Bianchi-Berthouze 2011), Uni-ULM (Glodek et al 2011), GaTechKim (Kim et al 2011), LSU (Calix et al 2011), Waterloo (Sayedelahl et al 2011), NLPR (Pan et al 2011), USC (Ramirez et al 2011), GaTechSun (Sun and Moore 2011), I2R-SCUT (Cen et al 2011), UCR (Cruz et al 2011) and UMontreal (Dahmane and Meunier 2011a, b). For AVEC 2012: UPMC-UAG (Nicolle et al 2012), Supelec-Dynamixyz-MinesTelecom (Soladie et al 2012), UPenn (Savran et al 2012a), USC (Ozkan et al 2012), Delft (van der Maaten 2012), Uni-ULM (Glodek et al 2012), Waterloo2 (Fewzee and Karray 2012).…”
Section: Audio/visual Emotion Challenge 2011/2012
confidence: 99%
“…These two methods improve the Gabor energy filter, but do not address both generalization and compactness of the representation. Out of the top six approaches for AVEC 2011, only one approach used a Gabor energy filter [17]. Approaches preferred LPQ, LBP or active appearance models.…”
Section: Motivation
confidence: 99%
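The Gabor energy filter contrasted with LPQ and LBP above can be illustrated with scikit-image: a quadrature pair of Gabor responses is computed and combined into an energy map. The image, frequency, and orientation below are illustrative assumptions, not parameters from any cited approach:

```python
# Sketch of a basic Gabor energy response, assuming scikit-image.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # synthetic stand-in for a face patch

# Even (real) and odd (imaginary) Gabor responses at one frequency/orientation;
# the "energy" combines the quadrature pair into a phase-invariant response.
real, imag = gabor(image, frequency=0.25, theta=np.pi / 4)
energy = np.sqrt(real**2 + imag**2)
print(energy.shape)  # (64, 64)
```

In a full descriptor, such energy maps would typically be computed over a bank of frequencies and orientations and pooled over image regions.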