2014
DOI: 10.1016/j.patrec.2013.10.026

Feature selection for improved 3D facial expression recognition

Cited by 39 publications (11 citation statements)
References 8 publications
“…For all the experiments described so far in this paper, it has been assumed that the training set is representative of the possible different head poses in the test dataset. However, collecting training data for all possible head poses is not feasible in practice, and robustly estimating the head pose orientation from 2D data is still a challenge [3].…”
Section: Varied Head Poses
confidence: 99%
“…These features can be either hand-designed or learned from the training data. It is known that some features are more critical for analysing facial expressions than others, and a feature selection procedure can be applied to improve performance [3], [4]. Indeed, extracting complex 2D or 3D features can improve a system's performance, but often requires more computational resources.…”
Section: Introduction
confidence: 99%
“…Although some of the methods achieved good performance, their performance can be improved further if two types of features were used.

Method              Accuracy  Feature type
[23]                83.6%     Geometric
Rabiu et al [13]    92.2%     Geometric
Soyel et al [18]    91.3%     Geometric
Xioli et al [9]     90.2%     Geometric
Tekguc et al [19]   88.1%     Geometric
T. Yun [25]         85.39%    Texture
Lemaire et al [8]   78.13%    Texture
Yurtkan et al [26]

Fig. 3.…”
Section: A. BU-3DFE Database, 1) Data
confidence: 99%
“…Rabiu et al [13] used the geometric data to obtain 16 feature distances based on the FACS principle, along with 27 angles, using maximum relevance minimum redundancy (mRMR) to reduce the features and then a Support Vector Machine (SVM) for classification. Yurtkan et al [26] recently proposed a feature selection procedure for improved facial expression recognition utilizing 3-Dimensional (3D) geometric facial feature point positions. It classifies expressions into six basic emotional categories: anger, disgust, fear, happiness, sadness and surprise.…”
Section: Introduction
confidence: 99%
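The mRMR-plus-SVM pipeline described in the excerpt above can be sketched as follows. This is a minimal illustration only, not the cited paper's implementation: the data is synthetic, the feature count (16 distances + 27 angles = 43) merely mirrors the numbers quoted, and the greedy relevance-minus-redundancy criterion (mutual information for relevance, mean absolute correlation for redundancy) is one common mRMR variant, assumed here for concreteness.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 120 samples, 43 features
# (mirroring the 16 distances + 27 angles quoted above),
# six expression classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 43))
y = rng.integers(0, 6, size=120)

def mrmr_select(X, y, k):
    """Greedy mRMR: score each candidate feature by its relevance
    (mutual information with the label) minus its redundancy
    (mean absolute correlation with already-selected features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]   # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
            )
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Select 10 features, then classify the reduced data with an SVM.
features = mrmr_select(X, y, k=10)
clf = SVC(kernel="rbf")
acc = cross_val_score(clf, X[:, features], y, cv=5).mean()
print(f"selected {len(features)} features, cv accuracy {acc:.2f}")
```

On random data the accuracy is near chance; the point of the sketch is the two-stage structure, first reducing the feature set with mRMR and only then fitting the SVM on the selected columns.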
“…This paper draws inspiration from the original feature selection method, based on the entropy values of facial feature points, that we proposed in our earlier work [1,2]. Differently, in this paper the face representation is built from 3D facial feature distances.…”
unclassified