Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.
DOI: 10.1109/icassp.2005.1415116
Meta-Classifiers in Acoustic and Linguistic Feature Fusion-Based Affect Recognition

Cited by 50 publications (42 citation statements)
References 4 publications
“…Previous work [2] has indicated that an average emotion recognition rate of 84% is achieved in speaker-dependent experiments, whereas for the speaker-independent case the emotion recognition rate drops to 60%. This conclusion is also verified in [56] for 10 different classifiers: the average emotion recognition performance equals 89.49% in the speaker-dependent case, whereas it drops to 71.29% in the speaker-independent one.…”
Section: Speaker-independent Experimental Protocol (supporting)
confidence: 61%
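The speaker-independent figures quoted above are typically obtained with a leave-one-speaker-out (LOSO) protocol: each speaker in turn is held out for testing while the classifier is trained on all others. A minimal sketch, using synthetic data and a toy nearest-centroid classifier (placeholders, not the cited systems):

```python
# Hedged sketch: leave-one-speaker-out (LOSO) evaluation, the usual way to
# measure speaker-independent accuracy. Data and the nearest-centroid
# classifier are toy placeholders, not the cited systems.
import numpy as np

def nearest_centroid_predict(X_tr, y_tr, X_te):
    """Assign each test sample the label of the closest class centroid."""
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = ((X_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))            # 120 utterances, 10 acoustic features
y = rng.integers(0, 4, size=120)          # 4 emotion classes (placeholder labels)
speakers = np.repeat(np.arange(6), 20)    # 6 speakers, 20 utterances each

accs = []
for s in np.unique(speakers):
    tr, te = speakers != s, speakers == s  # hold out one speaker entirely
    pred = nearest_centroid_predict(X[tr], y[tr], X[te])
    accs.append((pred == y[te]).mean())

print(f"speaker-independent accuracy: {np.mean(accs):.3f}")
```

Because no held-out speaker ever contributes training data, the resulting accuracy is systematically lower than a speaker-dependent split, matching the 84% vs. 60% gap the excerpt reports.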
“…75 features were found to yield the best performance (emotion recognition accuracy of 97.0%) for the linear SVM. This procedure has been verified to be successful in [56]. For reasons of homogeneity, the same number of features is retained for the remaining levels of the psychologically-inspired binary cascade classification schema.…”
Section: Feature Selection (mentioning)
confidence: 99%
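Retaining a fixed-size feature subset, as in the 75-feature selection above, is commonly done by ranking features with a class-separability criterion and keeping the top k. A minimal sketch using a Fisher-style score (an illustrative criterion, not necessarily the one used in the cited work):

```python
# Hedged sketch (not the paper's exact algorithm): rank features by a
# Fisher-style discriminant score and keep the top k, one simple way to
# arrive at a fixed-size subset before training a linear SVM.
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_top_k(X, y, k):
    """Indices of the k highest-scoring features."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))   # 200 samples, 20 candidate features
y = rng.integers(0, 2, size=200)
X[:, 3] += 2.0 * y               # make feature 3 clearly discriminative
keep = select_top_k(X, y, 5)
print(keep)                      # feature 3 should rank first
```

The selected indices would then be applied consistently across all levels of a cascade, which is the homogeneity argument the excerpt makes.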
“…If confidences are provided at a lower level, they can be exploited as well. Still, the gain over single strong classifiers such as SVM may not justify the extra computational cost [178].…”
Section: Classification (mentioning)
confidence: 99%
“…Ensembles of classifiers [183,184,178,130,129] combine their individual strengths and may improve training stability. A number of different approaches exist for combining classifiers.…”
Section: Classification (mentioning)
confidence: 99%
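The two excerpts above contrast combination strategies: a plain majority vote over class labels, and a confidence-weighted fusion when lower-level confidences are available. A minimal sketch of both, with toy predictions in place of real classifiers:

```python
# Hedged sketch: two common late-fusion rules for classifier ensembles.
# The predictions and weights are toy stand-ins, not any cited system.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of integer class labels."""
    n_classes = int(predictions.max()) + 1
    votes = np.zeros((n_classes, predictions.shape[1]), dtype=int)
    for p in predictions:
        votes[p, np.arange(predictions.shape[1])] += 1
    return votes.argmax(axis=0)

def weighted_vote(probas, weights):
    """probas: (n_models, n_samples, n_classes) confidence scores;
    weights: per-model scalars (e.g. validation accuracies)."""
    fused = np.tensordot(weights, probas, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Three toy models, four samples, labels in {0, 1, 2}.
preds = np.array([[0, 1, 1, 2],
                  [0, 1, 2, 2],
                  [1, 1, 1, 2]])
print(majority_vote(preds))  # -> [0 1 1 2]
```

The weighted variant exploits per-model confidences when they exist; whether either fusion beats a single well-tuned SVM depends on how decorrelated the base models are, which is exactly the cost/benefit caveat in [178].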
“…The ELM model was recently applied successfully to the task of multimodal (audiovisual) emotion recognition in real-world conditions (in the wild) [14]. A recent survey [5] also noted that some methods well established in machine learning [15] are applied to the speech emotion recognition task without much deliberation, for example the combination of multiple models at the late-fusion stage. [16]) across two sub-challenges: the most accurate classification of emotions in speech (Classifier Performance Sub-Challenge) [17] (here and below, references point to the papers describing the winning system of each sub-challenge) and a competition of open projects on automatic paralinguistic speech analysis (Open Performance Sub-Challenge) [18].…”
Section: Architecture of the Baseline Paralinguistic Speech Analysis System (unclassified)