2016
DOI: 10.1007/978-981-10-0557-2_44

Convolutional Neural Network Models for Facial Expression Recognition Using BU-3DFE Database

Cited by 23 publications (11 citation statements)
References 8 publications
“…We note that despite the elaborate framework proposed in [38], our less complex pipeline based on modality fusion at the latent representation level significantly outperforms their reported result; see Table 2. Furthermore, we observe that [39] reported a test accuracy of 92% based on the combination of 2D and 3D features trained on DCNNs from scratch. However, [39] did not follow any of the well-known protocols for training and evaluation on the BU-3DFE dataset; that is, using a set of 60 randomly selected subjects, either fixed or not, in a 10-fold cross-validation scheme for training and evaluation.…”
Section: Results
Mentioning confidence: 89%
“…Furthermore, we observe that [39] reported a test accuracy of 92% based on the combination of 2D and 3D features trained on DCNNs from scratch. However, [39] did not follow any of the well-known protocols for training and evaluation on the BU-3DFE dataset; that is, using a set of 60 randomly selected subjects, either fixed or not, in a 10-fold cross-validation scheme for training and evaluation. Instead, they consider the whole 100 subjects for training and evaluation, using 90 subjects for training and the remaining 10 subjects for testing.…”
Section: Results
Mentioning confidence: 89%
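The protocol the citing authors describe can be made concrete with a short sketch: pick a fixed random subset of 60 of the 100 BU-3DFE subjects, then run subject-exclusive 10-fold cross-validation (54 training / 6 test subjects per fold). The function name, subject-ID format, and seed below are illustrative assumptions, not details from either paper.

```python
import random

def bu3dfe_protocol_folds(all_subjects, n_selected=60, n_folds=10, seed=0):
    """Sketch of the common BU-3DFE evaluation protocol: randomly select a
    fixed set of 60 of the 100 subjects, then build subject-exclusive
    10-fold cross-validation splits (54 train / 6 test per fold)."""
    rng = random.Random(seed)
    selected = rng.sample(sorted(all_subjects), n_selected)
    fold_size = n_selected // n_folds
    folds = []
    for i in range(n_folds):
        test = selected[i * fold_size:(i + 1) * fold_size]
        train = [s for s in selected if s not in test]
        folds.append((train, test))
    return folds

# Hypothetical subject IDs standing in for the 100 BU-3DFE subjects.
folds = bu3dfe_protocol_folds([f"F{i:04d}" for i in range(100)])
```

Splitting by subject rather than by sample is the point of the protocol: it ensures no person appears in both the training and test sets of any fold.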
“…Features are usually calculated on the regions surrounding principal facial landmarks, or on the mouth and eyes, which inherently contain essential information for emotion recognition. These key features, considered closely related to expression categories, are then fed to various classifiers to perform FER, such as Support-Vector Machines (SVM) [47][48][49][50][51], AdaBoost, k-Nearest Neighbors (k-NN), Linear Discriminant Analysis (LDA), Modified Principal Component Analysis (PCA), Hidden Markov Models (HMM) [44][45][46], Random Forests [52], or Neural Networks [51,53,54].…”
Section: Feature-based vs Model-based Algorithms
Mentioning confidence: 99%
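To illustrate the feature-then-classifier pipeline described above, here is a minimal sketch of one of the listed classifiers, k-Nearest Neighbors, applied to hypothetical landmark-derived features. The feature values and labels are synthetic stand-ins, not BU-3DFE data.

```python
import math
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Minimal k-NN classifier: rank training samples by Euclidean
    distance to the query and take a majority vote over the k nearest."""
    dists = sorted(
        (math.dist(f, query), lbl) for f, lbl in zip(train_feats, train_labels)
    )
    top = [lbl for _, lbl in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Toy geometric features: (mouth-corner distance, eye opening) per sample.
feats = [(0.9, 0.2), (0.95, 0.25), (0.3, 0.8), (0.25, 0.85)]
labels = ["happy", "happy", "surprise", "surprise"]
pred = knn_predict(feats, labels, (0.92, 0.22))  # -> "happy"
```

In a real FER system the two-dimensional toy vectors would be replaced by the landmark- or region-based descriptors the survey excerpt mentions, and any of the other listed classifiers could be substituted at the same point in the pipeline.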
“…3D FER has become an extensive field of research, with many early attempts in [3], [4], [5], [6], [7] and most recent works in [8], [9], [10] that tend to use both 2D and 3D multi-modal data to further improve accuracy. Huynh et al [11] proposed to use deep CNNs for classifying the six basic facial expressions. Two CNNs are trained on the BU-3DFE database, based on the 2D facial appearance and the 3D face shape, respectively.…”
Section: Related Work
Mentioning confidence: 99%