2011
DOI: 10.1007/978-3-642-21227-7_53

Expression Recognition in Videos Using a Weighted Component-Based Feature Descriptor

Abstract: In this paper, we propose a weighted component-based feature descriptor for expression recognition in video sequences. First, we extract texture features and structural shape features from three facial regions of each face image: the mouth, cheeks, and eyes. Then, we combine these extracted feature sets using a confidence-level strategy. Noting that different facial components contribute differently to expression recognition, we propose a method for automatically learning different weights to co…
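The pipeline described in the abstract can be illustrated with a minimal sketch, assuming per-region classifiers have already been trained and expose sklearn-style predict_proba confidences; the region crops, histogram features, and helper names below are illustrative assumptions, not the paper's actual texture and shape descriptors.

```python
# Minimal sketch of weighted component-based fusion (assumed helpers,
# not the paper's exact implementation).
import numpy as np

REGIONS = ("mouth", "cheeks", "eyes")

def extract_region_features(face, region):
    """Illustrative stand-in for the per-component features: a grey-level
    histogram of a fixed horizontal crop (eyes / cheeks / mouth)."""
    h = face.shape[0]
    crops = {
        "eyes":   face[: h // 3, :],
        "cheeks": face[h // 3 : 2 * h // 3, :],
        "mouth":  face[2 * h // 3 :, :],
    }
    hist, _ = np.histogram(crops[region], bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def fuse_region_confidences(face, classifiers, weights):
    """Weight each component's class-confidence vector and sum them,
    then return the index of the highest-scoring expression."""
    fused = None
    for region in REGIONS:
        feats = extract_region_features(face, region).reshape(1, -1)
        probs = classifiers[region].predict_proba(feats)[0]  # per-class confidence
        contrib = weights[region] * probs
        fused = contrib if fused is None else fused + contrib
    return int(np.argmax(fused))
```

Under this reading, the learned weights simply scale each component's confidence vector before the class scores are summed, so a more discriminative region can dominate the fused decision.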

Cited by 11 publications (3 citation statements)
References 25 publications
“…[35][36][37][38] More recent works have investigated the problem of expression analysis in videos. [39][40][41] To deal with such a dynamic process, the methods must be capable of effectively considering temporal alignment and semantic representation. 40 In order to promote and improve the development of automatic expression recognition approaches, several challenges 42,43 and datasets [44][45][46][47][48][49][50][51] have been created which aim to establish a common platform for creating and validating expression recognition methods in both controlled and real-world conditions.…”
Section: Introduction
confidence: 99%
“…Happiness and surprise both maintain good performance, while the recognition accuracy of the other facial expressions increases significantly. We then compare the performance of our algorithm with some state-of-the-art facial expression recognition approaches, including active appearance model (AAM) tracked similarity-normalized shape (SPTS) and canonical appearance (CAPP) features with a linear SVM [21], constrained local model (CLM) tracked SPTS and CAPP features with a linear SVM [22], emotion avatar image (EAI) and local phase quantization (LPQ) features with a linear SVM [23], and the weighted component-based feature descriptor algorithm [24]. Table IV gives the classification accuracy of the state-of-the-art facial expression recognition algorithms and the proposed method.…”
Section: B. Experiments on the Extended Cohn-Kanade Dataset
confidence: 99%
“…The main idea of the Fisher criterion is to learn the weights of each region by keeping the within-class scatter as small as possible and the between-class scatter as large as possible [16,17]. For the C-class problem, the similarities of samples from the same class form the within-class scatter, while the differences among samples from different classes form the between-class scatter.…”
Section: Weights Learned Based on Fisher Criterion
confidence: 99%
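To make the criterion concrete, a Fisher-style region weight can be sketched as the ratio of between-class to within-class scatter computed from that region's features; the notation below is an assumed illustration, not the paper's exact formulation.

```latex
% Sketch of a Fisher-criterion weight for facial region r over C classes
% (assumed notation): \mu_c^{(r)} is the class-c mean of region-r features,
% \mu^{(r)} the overall mean, N_c the number of samples in class c, and
% \mathcal{X}_c the set of class-c samples.
\[
S_B^{(r)} = \sum_{c=1}^{C} N_c \left\| \mu_c^{(r)} - \mu^{(r)} \right\|^2 ,
\qquad
S_W^{(r)} = \sum_{c=1}^{C} \sum_{x \in \mathcal{X}_c} \left\| x^{(r)} - \mu_c^{(r)} \right\|^2 ,
\qquad
w_r \propto \frac{S_B^{(r)}}{S_W^{(r)}} .
\]
```

Regions whose features separate the expression classes well (large between-class scatter relative to within-class scatter) thus receive larger weights in the fused descriptor.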