2017
DOI: 10.1007/978-3-319-71607-7_19

Deep Convolutional Neural Network for Facial Expression Recognition

Cited by 10 publications (19 citation statements). References 18 publications.
“…Even though the image is of low resolution and the label of the relatively large dataset is noisy, this approach is effective. The work closely related to ours is [9], which proposed to employ a peak expression image (easy sample) to help the training of a network with input from a weak expression image (hard sample). This is also achieved by a regression loss between the intermediate feature maps.…”
Section: Related Work (mentioning)
confidence: 99%
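The excerpt above refers to the peak-piloted idea of [9] (PPDN): a peak-expression image (easy sample) and a non-peak image (hard sample) of the same subject pass through the same network, and a regression loss between their intermediate feature maps pulls the hard sample's features toward the easy sample's, alongside the usual classification losses. A minimal sketch of that objective, assuming a PyTorch setup in which the backbone/classifier split, the single regression layer, and the weight lam are illustrative choices rather than details taken from the cited papers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeakPilotedLoss(nn.Module):
    """Sketch of a peak-piloted objective: classification on both images plus
    an L2 regression term that pulls the non-peak (hard) intermediate feature
    maps toward the peak (easy) ones. All module choices are placeholders."""

    def __init__(self, backbone: nn.Module, classifier: nn.Module, lam: float = 0.5):
        super().__init__()
        self.backbone = backbone      # produces intermediate feature maps
        self.classifier = classifier  # maps flattened features to expression logits
        self.lam = lam                # weight of the feature-regression term (assumed)

    def forward(self, x_peak, x_nonpeak, labels):
        f_peak = self.backbone(x_peak)        # feature maps from the easy sample
        f_nonpeak = self.backbone(x_nonpeak)  # feature maps from the hard sample

        logits_peak = self.classifier(f_peak.flatten(1))
        logits_nonpeak = self.classifier(f_nonpeak.flatten(1))

        # Standard classification losses on both branches.
        cls_loss = (F.cross_entropy(logits_peak, labels)
                    + F.cross_entropy(logits_nonpeak, labels))

        # Regression loss between intermediate feature maps; the peak branch
        # is detached so it acts as a fixed target for the non-peak branch.
        reg_loss = F.mse_loss(f_nonpeak, f_peak.detach())

        return cls_loss + self.lam * reg_loss
```

In the peak-piloted formulation the regression target is not back-propagated through the peak branch; detaching the peak features, as above, is the simplest way to express that in a single-layer sketch.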
“…Motivated by this observation, several previous works [8], [9] on expression recognition utilize face recognition datasets to pre-train the network, which is then fine-tuned on the expression dataset. The large amount of labeled face data [4], [10], makes it possible to train a fairly complicated and deep network.…”
Section: Introduction (mentioning)
confidence: 99%
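The excerpt above describes the pre-train/fine-tune recipe used in [8], [9]: train a network on a large labeled face dataset first, then swap in an expression classification head and fine-tune on the comparatively small expression dataset. The sketch below illustrates that recipe with placeholder choices: a torchvision ResNet-18 with ImageNet weights stands in for a face-recognition-pretrained backbone, and the class count, learning rates, and momentum are assumptions, not values from the cited works.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_EXPRESSIONS = 7  # e.g. the basic expression classes; adjust to the dataset

# Stand-in for a network pre-trained on a large face dataset;
# here ImageNet weights are only a placeholder for such pre-training.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the original classification head with an expression classifier.
model.fc = nn.Linear(model.fc.in_features, NUM_EXPRESSIONS)

# Fine-tune: a small learning rate for the pre-trained layers,
# a larger one for the freshly initialised head.
optimizer = torch.optim.SGD([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
     "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
], momentum=0.9)

criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch from the expression dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Using a smaller learning rate for the pre-trained layers than for the new head is a common way to keep fine-tuning from washing out the representation learned on the larger dataset.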
“…Among all the compared databases, our proposed IDEnNet outperforms the state-of-the-art methods including handcraft-based methods (LBP-TOP [28], and HOG 3D [9]), video-based methods (MSR [19], AdaLBP [27], Atlases [5], STM-ExpLet [13], and DTAGN [7]), and CNN-based methods (3D-CNN [12], 3D-CNN-DAP [12], DTAGN [7], PPDN [29], GCNet [8], and FN2EN [4]).…”
Section: Results (mentioning)
confidence: 92%
“…Average Accuracy (%):
HOG 3D [9]       70.63
AdaLBP [27]      73.54
STM-ExpLet [13]  74.59
Atlases [5]      75.52
DTAGN [7]        81.46
PPDN [29]        84.59
GCNet [8]        86.39
FN2EN [4]        87…”
Section: Methods (mentioning)
confidence: 99%