2014
DOI: 10.1109/taffc.2014.2346515
Intra-Class Variation Reduction Using Training Expression Images for Sparse Representation Based Facial Expression Recognition

Abstract: Automatic facial expression recognition (FER) is becoming increasingly important in the area of affective computing because of its emerging applications, such as human-machine interfaces and human emotion analysis. Recently, sparse representation based FER has become popular and has shown impressive performance. However, sparse representation can often produce a less meaningful sparse solution for FER due to intra-class variation, such as variation in identity or illumination. This paper propos…
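The sparse representation based classification (SRC) scheme the abstract refers to can be sketched as follows: a test face is coded as a sparse linear combination of all training faces, and the predicted class is the one whose training samples yield the smallest class-restricted reconstruction residual. This is a minimal illustrative sketch, not the paper's actual method (which additionally reduces intra-class variation); the function name `src_classify` and the use of scikit-learn's `Lasso` as the l1 solver are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Classify test sample y by sparse representation over dictionary D.

    D: (d, n) matrix whose columns are (unit-norm) training samples.
    labels: (n,) class label per column.
    Returns the label whose class-restricted reconstruction residual
    is smallest -- the standard SRC decision rule.
    """
    # Approximate min ||x||_1 s.t. y ~ D x via l1-regularised least squares.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)
    x = lasso.coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)
```

The intra-class variation problem the abstract raises shows up here directly: if identity or illumination dominates the columns of `D`, the sparse code may select same-identity rather than same-expression atoms, which is what the paper's variation-reduction step targets.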

Cited by 112 publications (38 citation statements)
References 44 publications (126 reference statements)
“…We believe that assigning a single expression to an image can be ambiguous when there is a transition between expressions or the given expression is not at its peak, and therefore the top-2 expression can result in better classification performance when evaluating image sequences. [30], 84.4 [21], 88.5 [42], 92.0 [24], 92.4 [25], 93.6 [49] FER2013 66.4±0.6 81.7±0.3 69.3 [44] The proposed architecture was implemented using the Caffe toolbox [16] on a Tesla K40 GPU. It takes roughly 20 hours to train 175K samples for 200 epochs.…”
Section: Results (mentioning)
confidence: 99%
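The excerpt's argument for top-2 evaluation (a transitional frame may legitimately belong to two expressions) corresponds to the standard top-k accuracy metric. A minimal sketch, assuming a score matrix of shape (n_samples, n_classes); the function name is illustrative:

```python
import numpy as np

def top_k_accuracy(scores, true_labels, k=2):
    """Fraction of samples whose true label is among the k highest-scoring
    classes. scores: (n_samples, n_classes) array of classifier scores."""
    # argsort ascending, then take the last k columns as the top-k predictions
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = [true_labels[i] in top_k[i] for i in range(len(true_labels))]
    return float(np.mean(hits))
```

With k=1 this reduces to ordinary accuracy, so the gap between the two numbers (e.g. 66.4 vs. 81.7 on FER2013 in the excerpt) measures how often the correct expression was the runner-up prediction.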
“…the cross-database task is a much more challenging task than the subject-independent one, and the recognition rates are considerably lower than in the subject-independent case. Table II shows the recognition rate achieved on each database in [33], 84.4 [34], 88.5 [35], 92.0 [36], 92.4 [37], 93.6 [38] MMI 78.68 55.83 63.4 [37], 75.12 [39], 74.7 [36], 79.8 [35], 86.7 [2], 78.51 [40] FERA 66.66 49.64 56.1 [37], 55.6 [41] [3] the cross-database case, and it also compares the results with other state-of-the-art methods. Like before, in the “Inception-ResNet without CRF” column, the CRF module is replaced with a softmax in our proposed network.…”
Section: B. Results (mentioning)
confidence: 99%
“…Table 2 presents the comparative recognition rates on the BU-3DFE database. The method of Lee et al. is based on sparse representation [12]. Zheng's method uses group sparse reduced rank regression (GSRRR) solved with ALM [13].…”
Section: Simulation Results (mentioning)
confidence: 99%
“…Average Recognition Rate (%): Lee et al. [12] 87.85; Zheng [13] 78.9; analyzed LBP based 82.33. VI. CONCLUSION: In this paper, we analyzed the performance of Local Binary Patterns for face recognition.…”
Section: Methods (mentioning)
confidence: 99%
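The Local Binary Pattern operator analyzed in the last excerpt thresholds each pixel's 8 neighbours against the centre pixel to produce an 8-bit texture code; histograms of those codes then serve as the face descriptor. A minimal sketch of the basic 3x3 LBP, vectorised with NumPy; the bit ordering is a convention of this sketch, not taken from the cited work:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: each pixel is encoded by comparing
    its 8 neighbours against the centre value, yielding an 8-bit code."""
    padded = np.pad(gray, 1, mode='edge')   # replicate the border pixels
    h, w = gray.shape
    # neighbour offsets in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        # set this bit wherever the neighbour is >= the centre pixel
        codes |= (neighbour >= gray).astype(np.uint8) << bit
    return codes
```

A flat region maps every pixel to code 255 (all neighbours tie the centre), while an isolated bright pixel maps to code 0, which is why LBP codes capture local texture rather than absolute intensity.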