Facial Expression Recognition in Video with Multiple Feature Fusion
2018 · DOI: 10.1109/taffc.2016.2593719

Cited by 184 publications (85 citation statements) · References 42 publications
“…The overall system accuracy of 47.44% is comparable with the similar state-of-the-art work of J. Chen, conducted on the extended Cohn-Kanade (CK+) database (clean environment) and the Acted Facial Expressions in the Wild (AFEW) 4.0 database [42], which reported an overall average accuracy of 45.2%, a gap of between 0.6% for clean videos and 5% for "wild" videos, using a newly proposed histogram of oriented gradients from three orthogonal planes (HOG-TOP) rather than the LBP features used in this study for real-world (close-to-wild) videos. The objective of this study was to appraise the efficiency of regression methods for automatic emotional state detection and analysis on embedded devices, applied to human emotion recognition from live cameras.…”
Section: Conclusion and Discussion (supporting)
confidence: 71%
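For reference, HOG-TOP extends ordinary HOG by computing oriented-gradient histograms on the three orthogonal planes (XY, XT, YT) of a video volume, so that the descriptor captures both appearance and motion. Below is a minimal sketch of that idea; slicing only the three centre planes and the particular `hog` parameters are simplifying assumptions for illustration, not the implementation from [42].

```python
# Minimal HOG-TOP sketch: HOG descriptors from the three orthogonal centre
# planes of a video volume, concatenated. Centre-plane slicing and the HOG
# parameters below are illustrative assumptions, not the reference method.
import numpy as np
from skimage.feature import hog

def hog_top(video, **hog_kwargs):
    """video: (T, H, W) grayscale volume. Returns concatenated HOG
    descriptors from the XY, XT and YT centre planes."""
    t, h, w = video.shape
    planes = [
        video[t // 2, :, :],   # XY plane: one spatial frame (appearance)
        video[:, h // 2, :],   # XT plane: a row's trajectory over time (motion)
        video[:, :, w // 2],   # YT plane: a column's trajectory over time (motion)
    ]
    return np.concatenate([hog(p, **hog_kwargs) for p in planes])

# Usage on a dummy 32-frame, 64x64 clip:
clip = np.random.rand(32, 64, 64)
descriptor = hog_top(clip, orientations=8, pixels_per_cell=(8, 8),
                     cells_per_block=(1, 1))
```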
“…It is important to extract suitable features from the faces for the global alignment kernel, as an appropriate feature can better measure the distances between faces. Existing approaches to multiple feature fusion [43], [44] show that combining multiple features exploits the complementary benefits of different features and obtains better performance than any single feature alone. Additionally, the existing group-level emotion recognition databases were collected from the Internet and suffer from noise caused by poor illumination, head pose variation and bad image quality.…”
Section: SVM Based on Combined Global Alignment Kernel (mentioning)
confidence: 99%
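To make the kernel-combination idea concrete, the sketch below computes a Cuturi-style global alignment kernel (GAK) per feature stream and sums the per-feature Gram matrices with fixed weights before a precomputed-kernel SVM. The plain Gaussian local kernel, the feature names, and the equal weights are illustrative assumptions; the cited work's exact kernel construction may differ.

```python
# Sketch: per-feature global alignment kernels combined by a weighted sum,
# fed to a precomputed-kernel SVM. Feature names and weights are assumptions.
import numpy as np
from sklearn.svm import SVC

def gak(x, y, sigma=1.0):
    """Global alignment kernel between sequences x (n, d) and y (m, d):
    sums Gaussian local similarities over all monotonic alignments via
    dynamic programming (Cuturi-style recursion)."""
    n, m = len(x), len(y)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    k = np.exp(-d2 / (2 * sigma ** 2))                   # local Gaussian kernel
    M = np.zeros((n + 1, m + 1))
    M[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            M[i, j] = k[i - 1, j - 1] * (M[i - 1, j] + M[i, j - 1] + M[i - 1, j - 1])
    return M[n, m]

def combined_gram(seq_sets, weights, sigma=1.0):
    """seq_sets: dict feature_name -> list of (T_i, d) sequences, one per clip.
    Returns the weighted sum of the per-feature GAK Gram matrices."""
    n = len(next(iter(seq_sets.values())))
    G = np.zeros((n, n))
    for name, seqs in seq_sets.items():
        for i in range(n):
            for j in range(i, n):
                G[i, j] += weights[name] * gak(seqs[i], seqs[j], sigma)
                G[j, i] = G[i, j]  # keep the Gram matrix symmetric
    return G

# Usage with two hypothetical per-frame feature streams per video clip:
rng = np.random.default_rng(0)
clips_lbp = [rng.random((rng.integers(10, 20), 16)) for _ in range(6)]
clips_geom = [rng.random((len(c), 8)) for c in clips_lbp]
labels = [0, 0, 0, 1, 1, 1]
G = combined_gram({"lbp": clips_lbp, "geometry": clips_geom},
                  weights={"lbp": 0.5, "geometry": 0.5})
clf = SVC(kernel="precomputed").fit(G, labels)
```

Note that the plain Gaussian local kernel is not guaranteed to yield a positive semi-definite GAK; Cuturi's normalized variant is the safer choice in a production setting.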
“…Step 1: the Gini index is used to measure the splitting quality of a feature in a decision tree. In this paper, to train the RF algorithm, we extract 106 active patches from each micro-expression and calculate, for each patch, the joint histogram that integrates the optical flow feature with the LBP-TOP operator. Ultimately, these form the 106 patch-histogram feature vectors shown in Equation (19).…”
Section: NMP Extraction From the Active Patches (mentioning)
confidence: 99%
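As a concrete reading of this step, the sketch below builds, for each active patch, a joint histogram over quantized optical-flow orientations and LBP codes, concatenates the patch histograms into one vector per sample, and trains a Gini-criterion random forest. Using plain LBP on the current frame instead of full LBP-TOP, the three-patch grid, and the bin counts are all simplifying assumptions (the quoted work uses 106 patches).

```python
# Sketch: per-patch joint histograms of optical-flow orientation and LBP
# codes, concatenated and fed to a Gini-criterion random forest. Patch grid,
# bin counts and plain LBP (not full LBP-TOP) are simplifying assumptions.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

N_ORI = 8   # flow-orientation bins per patch
N_LBP = 10  # P + 2 uniform LBP codes for P = 8

def patch_joint_hist(prev, curr, box):
    """Joint histogram of optical-flow orientation and LBP codes inside
    one active patch given as (x, y, w, h)."""
    x, y, w, h = box
    flow = cv2.calcOpticalFlowFarneback(prev[y:y+h, x:x+w], curr[y:y+h, x:x+w],
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    ori = np.arctan2(flow[..., 1], flow[..., 0])          # flow direction
    ori_bin = ((ori + np.pi) / (2 * np.pi) * N_ORI).astype(int) % N_ORI
    lbp = local_binary_pattern(curr[y:y+h, x:x+w], P=8, R=1, method="uniform")
    hist, _, _ = np.histogram2d(ori_bin.ravel(), lbp.ravel().astype(int),
                                bins=[np.arange(N_ORI + 1), np.arange(N_LBP + 1)])
    return hist.ravel() / max(hist.sum(), 1)              # normalized joint hist

def clip_features(prev, curr, boxes):
    """Concatenate the joint histograms of all active patches (106 in the
    quoted work) into one feature vector."""
    return np.concatenate([patch_joint_hist(prev, curr, b) for b in boxes])

# Usage on dummy frames with a few illustrative patches:
rng = np.random.default_rng(1)
frames = [(rng.random((96, 96)) * 255).astype(np.uint8) for _ in range(8)]
boxes = [(0, 0, 32, 32), (32, 32, 32, 32), (64, 64, 32, 32)]
X = np.stack([clip_features(frames[i], frames[i + 1], boxes) for i in range(4)])
y = [0, 1, 0, 1]
rf = RandomForestClassifier(n_estimators=50, criterion="gini").fit(X, y)
```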