2017
DOI: 10.20944/preprints201701.0120.v1
Preprint

Active AU Based Patch Weighting for Facial Expression Recognition

Abstract: Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on Action Unit (AU) weighting and patch weight optimization is proposed to represent t…
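The triplet-wise formulation mentioned in the abstract can be illustrated with a small sketch: rather than training a single multiclass model, one classifier is trained for every 3-expression subset and the per-triplet predictions are combined by voting at test time. This is a hedged reconstruction of the general idea only; the helper names, the use of scikit-learn's LinearSVC, and the voting scheme are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch: convert a multiclass FER problem into triplet-wise
# classification by training one classifier per 3-expression subset and
# aggregating their votes. Hypothetical; not the paper's exact method.
from itertools import combinations
import numpy as np
from sklearn.svm import LinearSVC

def train_triplet_classifiers(X, y, n_classes):
    """Train one classifier for every triplet of expression labels."""
    models = {}
    for triplet in combinations(range(n_classes), 3):
        mask = np.isin(y, triplet)               # keep samples of these 3 classes only
        clf = LinearSVC().fit(X[mask], y[mask])  # one small 3-class problem
        models[triplet] = clf
    return models

def predict_by_voting(models, X, n_classes):
    """Each triplet classifier votes for one label; the majority label wins."""
    votes = np.zeros((X.shape[0], n_classes))
    for triplet, clf in models.items():
        for i, label in enumerate(clf.predict(X)):
            votes[i, label] += 1
    return votes.argmax(axis=1)
```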


Cited by 7 publications (7 citation statements). References 33 publications (50 reference statements).
“…Our experiments are conducted on the JAFFE, MMI, CASIA, CK+, and CK + 7 databases using two different cross-validation schemes. The obtained results show that our fusing method outperformed the use of the traditional fusing and the existing works of different state-of-the-art approaches.…”

Table 7. Comparison to state-of-the-art methods on the CK + 6 database using subject-independent cross-validation (Article, Method, Accuracy):
2017 [33]: DLP-CNN, 95.78
2015 [40]: lp-norm MKL multiclass-SVM, 95.50
2009 [13]: Boosted-LBP, 95.10; LBP uniform, 92.60
2016 [11]: deep NN architecture, 93.20
2013 [45]: two-stage classification of (LBP + shape), 89.20
Our method: PCA-fusion, 95.97

Table 8. Comparison to state-of-the-art methods on the CK + 7 database using subject-independent cross-validation (Article, Method, Accuracy):
2017 [61]: Boosting-POOF, 95.70
2017 [36]: IACNN, 95.37
2017 [32]: IL-CNN, 94.35
2017 [62]: triplet-wise-based GSF, 94.09
2015 [35]: AU-inspired deep networks (AUDN GSL = 2), 93.70
2015 [40]: lp-norm MKL multiclass-SVM, 93.60
2013 [63]: AU …

Section: Results (mentioning)
confidence: 99%
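The subject-independent cross-validation protocol referenced in these tables keeps all images of a given subject in the same fold, so no person appears in both the training and test splits. A minimal sketch using scikit-learn's GroupKFold, assuming per-sample subject IDs are available; the classifier choice and fold count are placeholders, not details from the cited works.

```python
# Minimal sketch of subject-independent cross-validation: samples are grouped
# by subject ID so the same person never appears in both train and test folds.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def subject_independent_cv(X, y, subject_ids, n_folds=10):
    accuracies = []
    splitter = GroupKFold(n_splits=n_folds)
    for train_idx, test_idx in splitter.split(X, y, groups=subject_ids):
        clf = LinearSVC().fit(X[train_idx], y[train_idx])
        accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(accuracies))   # average accuracy over all folds
```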
“…Humans have the ability to quickly filter out irrelevant information and lock in on parts of interest when recognizing objects. Recently, this kind of attention mechanism has been successfully applied in FER [17, 18, 20, 21, 24, 25, 26, 29, 35, 43, 57]. Zhong et al. [21] divided a facial image into non-overlapping patches to discover the common and specific patches that are important for discriminating all expressions and only a particular expression, respectively; they then discussed how different numbers of patches affect recognition performance.…”
Section: Related Work (mentioning)
confidence: 99%
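The patch-based idea attributed to Zhong et al. amounts to tiling an aligned face image into a grid of non-overlapping patches and extracting a descriptor per patch. A minimal NumPy sketch under that reading; the grid size and the raw-pixel "descriptor" are placeholder choices, not values from the cited work.

```python
# Minimal sketch: split an aligned face image into a grid of non-overlapping
# patches, producing one feature vector per patch. Grid size is a placeholder.
import numpy as np

def extract_patches(face, grid=(8, 8)):
    """face: 2-D grayscale array whose sides are divisible by the grid size."""
    rows, cols = grid
    h, w = face.shape[0] // rows, face.shape[1] // cols
    patches = []
    for r in range(rows):
        for c in range(cols):
            patch = face[r * h:(r + 1) * h, c * w:(c + 1) * w]
            patches.append(patch.ravel())   # raw pixels as a stand-in descriptor
    return np.stack(patches)                # shape: (rows * cols, h * w)

patches = extract_patches(np.zeros((64, 64)))   # 64 patches of 8x8 pixels each
```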
“…Facial expressions can be divided into six basic emotions, namely anger (An), disgust (Di), fear (Fe), happiness (Ha), sadness (Sa), and surprise (Su), plus one neutral (Ne) state [9]; contempt (Co) was subsequently added as one of the basic emotions [10]. Recognition of these emotions can be categorized into image-based [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] and video-based [38, 39, 40, 41, 42, 43] approaches. Image-based approaches use only the information in a static input image to determine the category of facial expression; video-based approaches, in addition to the spatial features extracted from a static image, can also use the temporal information of a dynamic image sequence to capture the temporal changes of facial appearance when a facial expression occurs.…”
Section: Introduction (mentioning)
confidence: 99%
“…The authors introduced a novel approach to discover the specificity of expression variation in the face (Xie et al. 2017). The specificity of each expression was captured through triplet-wise expression recognition based on Action Unit (AU) weighting and patch weight optimization.…”
Section: Related Work (mentioning)
confidence: 99%
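The AU-and-patch weighting idea described in this statement can be sketched roughly: each patch descriptor is scaled by a weight derived from the Action Units active in that facial region before the weighted features are concatenated for classification. The AU-to-patch mapping, the base weight, and the scaling rule below are invented placeholders for illustration, not the optimized weights from the paper.

```python
# Illustrative sketch of AU-based patch weighting: scale each patch feature by
# a weight reflecting how strongly active AUs cover that patch, then concatenate.
# The AU-to-patch map and weights are placeholders, not the paper's learned values.
import numpy as np

def weight_patches(patch_features, au_activations, au_to_patches, n_patches):
    """patch_features: (n_patches, d); au_activations: dict AU -> activation in [0, 1]."""
    weights = np.full(n_patches, 0.1)            # small base weight for every patch
    for au, activation in au_activations.items():
        for p in au_to_patches.get(au, []):
            weights[p] += activation             # boost patches covered by active AUs
    weighted = patch_features * weights[:, None] # scale each patch descriptor
    return weighted.ravel()                      # final feature vector for a classifier

# Toy usage: AU12 (lip-corner puller) mapped to two hypothetical mouth-region patches.
features = weight_patches(np.ones((64, 16)),
                          {"AU12": 0.9}, {"AU12": [52, 53]}, n_patches=64)
```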