Fusing multi-stream deep neural networks for facial expression recognition (2018)
DOI: 10.1007/s11760-018-1388-4

Cited by 16 publications (8 citation statements); References 26 publications.
“…These hand-designed features rely on a large amount of expert experience, have good interpretability, and can be used in various tasks in the field of computer vision [14]. For example, the Gabor wavelet can capture edge information in images at different scales and orientations, and is fairly robust to image rotation, deformation, and illumination changes; the Local Binary Pattern (LBP) is rotation-invariant and gray-level-invariant; and the Histogram of Oriented Gradients (HOG) is largely invariant to geometric and photometric deformations of the image [15]. Reference [16] uses partitioning technology for feature extraction, combined with a Hidden Markov Model (HMM) extension, to classify facial expression features.…”
Section: Related Research
confidence: 99%
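To make these hand-crafted descriptors concrete, the sketch below computes LBP, HOG, and Gabor features with scikit-image on a stand-in image; the parameter values (neighbourhood size, cell size, filter frequency) are illustrative assumptions, not settings taken from the cited papers.

```python
# Sketch: classic hand-crafted descriptors (LBP, HOG, Gabor) on a grayscale
# image. Parameter values are illustrative assumptions only.
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern, hog
from skimage.filters import gabor

face = data.camera()  # stand-in grayscale image (not a real face crop)

# LBP: uniform patterns are rotation- and gray-level-invariant; the
# histogram of pattern codes serves as a texture feature.
P, R = 8, 1
lbp = local_binary_pattern(face, P, R, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

# HOG: gradient-orientation histograms, robust to photometric changes.
hog_vec = hog(face, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2), block_norm="L2-Hys")

# Gabor: band-pass response at a chosen frequency/orientation captures
# edge structure at that scale and direction.
real, imag = gabor(face.astype(float) / 255.0, frequency=0.3, theta=np.pi / 4)
gabor_energy = np.sqrt(real**2 + imag**2).mean()

print(lbp_hist.shape, hog_vec.shape, gabor_energy)
```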
“…They exploited both spatial and temporal dimensions to achieve competitive recognition accuracies on benchmark datasets. Siddiqi et al. [25] presented an offline FER architecture that used stepwise linear discriminant analysis complemented with a hidden conditional random fields model. The former extracted relevant features from input expression images using partial F-test values, to reduce intra-class differences and inflate inter-class variation.…”
Section: Related Work
confidence: 99%
“…The latter, adept at approximating complex distributions with Gaussian density functions, was used to classify the extracted features. Salmam et al. [25] presented a hybrid model that coupled a CNN representing appearance-based features, such as wrinkles and skin folds, with a deep neural network based on geometric features characterizing salient facial parts such as the eyes, nose, and mouth. They demonstrated that integrating both types of features increased the efficiency of FER.…”
Section: Related Work
confidence: 99%
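The appearance-plus-geometry fusion pattern described here can be sketched as a two-stream network. The PyTorch model below is a minimal illustration under assumed layer sizes and a 68-point landmark input; it is not Salmam et al.'s actual architecture.

```python
# Sketch: fusing an appearance stream (CNN on face crops) with a geometry
# stream (MLP on facial-landmark coordinates). All sizes are assumptions.
import torch
import torch.nn as nn

class TwoStreamFER(nn.Module):
    def __init__(self, n_landmarks=68, n_classes=7):
        super().__init__()
        # Appearance stream: small CNN over a 64x64 grayscale face crop.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # Geometry stream: MLP over flattened (x, y) landmark coordinates.
        self.mlp = nn.Sequential(nn.Linear(n_landmarks * 2, 64), nn.ReLU())
        # Fusion: concatenate both embeddings, then classify 7 emotions.
        self.head = nn.Linear(128 + 64, n_classes)

    def forward(self, face, landmarks):
        fused = torch.cat([self.cnn(face), self.mlp(landmarks)], dim=1)
        return self.head(fused)

model = TwoStreamFER()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 68 * 2))
print(logits.shape)  # torch.Size([4, 7])
```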
“…Happy and Routray [18] integrated both types of features in a model that used local binary patterns to analyse texture information extracted from salient facial patches. Salmam et al. [19] combined a CNN architecture representing appearance features with a Deep Neural Network (DNN) framework based on geometric features, to demonstrate that integrating the features increased the efficiency of the FER method.…”
Section: Facial Geometry- or Appearance-Based Features
confidence: 99%
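As a rough sketch of the salient-patch idea, the snippet below pools uniform-LBP histograms over a few hypothetical facial regions into one texture vector; the patch coordinates and image are invented for illustration, not taken from Happy and Routray's method.

```python
# Sketch: LBP histograms over salient facial patches, concatenated into a
# single texture feature vector. Patch coordinates are invented placeholders.
import numpy as np
from skimage.feature import local_binary_pattern

def patch_lbp_features(face, patches, P=8, R=1):
    """face: 2-D grayscale array; patches: list of (row, col, h, w) boxes."""
    feats = []
    for r, c, h, w in patches:
        lbp = local_binary_pattern(face[r:r + h, c:c + w], P, R,
                                   method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)

face = np.random.rand(128, 128)  # stand-in face crop
patches = [(30, 20, 20, 30), (30, 78, 20, 30), (85, 40, 25, 48)]  # eyes, mouth
vec = patch_lbp_features(face, patches)
print(vec.shape)  # (30,): three patches x 10-bin uniform-LBP histograms
```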
“…Method                                      Accuracy (%)
… [46]                                        92.5
Meena et al. (2020) [47]                      92.9
Wei et al. (2020) [39]                        94.4
Makhmudkhujaev et al. (2019) [48]             94.5
Cheng and Zhou (2020) [49]                    96.0
Chen and Hu (2020) [50]                       96.3
Gan et al. (2020) [38]                        96.3
De la Torre et al. (2015) [51]                96.4
Qin et al. (2020) [40]                        96.8
Dyn-HOG (with multi-class SVM)                96.8
Salmam et al. (2019) [19]                     96.9
Li et al. (2020) [52]                         97.4
Zhao et al. (2018) [32], with optical flow    97.5
FlowCorr (with multi-class SVM)               98.0
Sadeghi and Raie (2019) [53]                  98.2

Table 10 encapsulates a per-emotion percentage-accuracy comparison of the presented descriptors against a perceptual study of human interpretation of basic emotions conducted by Calvo et al. [9] on the KDEF-dyn dataset.…”
Section: Recognition Performance of the Dyn-HOG Descriptor
confidence: 99%
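For context on the descriptor-plus-classifier pipelines this table compares, here is a minimal HOG-plus-multi-class-SVM sketch on synthetic data. It is not the cited Dyn-HOG implementation; every parameter and the data are assumptions for illustration.

```python
# Sketch: a descriptor + multi-class SVM pipeline of the kind the table
# compares (HOG features fed to an SVM). Synthetic data stands in for
# real expression images.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.random((60, 64, 64))   # stand-in 64x64 face crops
labels = rng.integers(0, 7, 60)     # 7 basic-emotion classes

# Extract a HOG vector per image, then split into train/test sets.
X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for im in images])
Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf")  # scikit-learn's SVC is one-vs-one multi-class
clf.fit(Xtr, ytr)
print("accuracy on synthetic data:", clf.score(Xte, yte))
```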