2019
DOI: 10.3390/electronics8030324
Improved Facial Expression Recognition Based on DWT Feature for Deep CNN

Abstract: Facial expression recognition (FER) has become one of the most important fields of research in pattern recognition. In this paper, we propose a method for the identification of facial expressions of people through their emotions. Being robust against illumination changes, this method combines four steps: Viola–Jones face detection algorithm, facial image enhancement using contrast limited adaptive histogram equalization (CLAHE) algorithm, the discrete wavelet transform (DWT), and deep convolutional neural netw…

Cited by 62 publications (34 citation statements)
References 33 publications
“…In [53], different levels of deep learning features extracted from the SIFT and CNN models were combined, and finally an SVM was used to classify the mixed features. In [54], the Viola–Jones method [13] was used to locate the face, and contrast limited adaptive histogram equalization (CLAHE) was used to enhance the facial image. Then, the discrete wavelet transform (DWT) was used to extract the facial features, and finally the extracted features were used to train the CNN.…”
Section: The Fusion Of Traditional Methods and Convolutional Neuramentioning
confidence: 99%
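The DWT feature-extraction step described in this citation can be illustrated with a minimal single-level 2-D transform. The sketch below uses the Haar wavelet implemented directly in NumPy as an assumption for illustration; the cited work may use a different wavelet family and a library implementation.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns the LL, LH, HL, HH subbands."""
    a = img.astype(float)
    # Filter along columns: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Filter along rows to form the four subbands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation (compact CNN input)
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

# Example: a 48x48 grayscale face patch; each subband is 24x24
face = np.random.rand(48, 48)
ll, lh, hl, hh = haar_dwt2(face)
```

The low-frequency LL subband preserves most of the facial structure at a quarter of the original size, which is what makes wavelet features a compact input representation for a CNN.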
“…The comparison results of recognition accuracy on the Fer2013, CK+, FER+ and RAF data sets are listed in Table IV, Table V, Table VI and Table VII, respectively.

Table IV — accuracy on Fer2013: Ref [55] 68.79%; DenseNet [25] 71.02%; GoogLeNet [25] 65.76%; VGG-Face [25] 69.18%; Ref [51] 61.86%; Ref [29] 65.03%; Ref [30] 68%; Ref [31] 66%; Ref [34] 70.02%; AlexNet [42] 66.67%; VGGNet [42] 69.41%; ResNet [42] 70.74%; MBCC-CNN (ours) 71.52%.

Table V — accuracy on CK+: Ref [46] 97.02%; Ref [53] 94.82%; Ref [23] 90.48%; Ref [28] 97.38%; Ref [51] 97.35%; Ref [33] 94.67%; Ref [34] 98%; WMCNN-LSTM [36] 97.50%; Ref [35] 87.20%; Ref [54] 96.46%; Ref [37] 98.38%; Ref [44] 96.15%; Ref [47] 96.46%; Ref [61] 96.28%; MBCC-CNN (ours) 98.48%.

Table VI — accuracy on FER+: DenseNet [62] 86.54%; Ref [63] 85.67%; Ref [64] 87.15%; Ref [65] 85.10%; Ref [66] 82.00%; Ref [67] 84.29%; Ref [68] 87.76%; ResNet50 (transfer learning) [69] 79.90%; MBCC-CNN (ours)…”
Section: F Comparison Of Recognition Accuracy With Other Methodsmentioning
confidence: 99%
“…This method works mainly by clipping the histogram at a predefined value, called the clip limit, in order to limit contrast amplification before the cumulative distribution function (CDF) is computed [16]. The method is fast, easy to implement, and fully automatic [17].…”
Section: Contrast Limited Adaptive Histogram Equalization (Clahe)mentioning
confidence: 99%
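The clip-limit idea described in this citation can be sketched for a single tile. The function below is a minimal NumPy illustration of the per-tile core of CLAHE (clip the histogram, redistribute the excess, then equalize via the CDF); full CLAHE additionally bilinearly interpolates the mappings between neighboring tiles, which this sketch omits, and the `clip_limit` default is an illustrative assumption.

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=40):
    """Histogram equalization of one 8-bit tile with a contrast clip limit."""
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    # Clip the histogram at the predefined clip limit ...
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit)
    # ... and redistribute the clipped excess uniformly over all bins,
    # which is what limits contrast amplification in near-uniform regions
    hist = hist + excess // 256
    # Cumulative distribution function -> intensity mapping onto [0, 255]
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    # Look up the new intensity for every pixel in the tile
    return cdf.astype(np.uint8)[tile]
```

Applied to a low-contrast tile (e.g. intensities clustered in 100–110), the mapping stretches the used range substantially while the clip limit prevents the near-empty bins from being amplified into noise.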
“…It is worth mentioning, finally, that polarity and sentiment detection were also exploited in face analysis [94][95][96][97][98][99], posture analysis [100], brain signals [101], and the behavioral analysis of groups of people [102]. These methodologies were tailored to specific tasks and strictly related to the video domain.…”
Section: Sentiment Analysis: Other Applicationsmentioning
confidence: 99%