2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
DOI: 10.1109/fg.2017.86

Fusing Multilabel Deep Networks for Facial Action Unit Detection

Abstract: The automatic detection of the activation of facial muscles, i.e. the detection of the so-called facial Action Units (AUs), has received significant attention due to the application of facial expression analysis/recognition in areas such as affect recognition or behavior analysis. However, the recognition of subtle expressions is a challenging task that requires a multimodal approach where several sources of information are used. In this paper, we follow such an approach and propose a novel Deep Learning archi…

Cited by 20 publications (49 citation statements)
References 27 publications (64 reference statements)
“…In Table 1, we also show the performance of Static [18] on the testing splits - we observe that our method obtains better results in the 8 expressions. This considerable difference in performance is mainly due to two reasons.…”
Section: Classification of Facial Expressions (mentioning)
Confidence: 90%
“…detection of the activation of certain facial muscles, has recently been treated as a pattern recognition problem, in which classifiers are trained in a supervised manner to take as input an image, or features extracted from it, and to output a set of binary labels, one for each AU that the method detects. In recent years, low-level feature extraction and the classifiers have been replaced by Convolutional Neural Networks (CNNs) [18], [35], [36], since they have been shown to learn better and more general appearance features than hand-crafted ones [37]. Furthermore, networks trained on large datasets on surrogate tasks (e.g.…”
Section: Low-level Feature Extraction (mentioning)
Confidence: 99%
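
The citation above frames AU detection as a multi-label classification problem: a network takes a face image (or features extracted from it) as input and outputs one independent binary label per AU. Below is a minimal sketch of that formulation, assuming PyTorch; the architecture, input resolution, and AU count are illustrative assumptions, not the networks proposed in the paper or the citing works.

# Minimal multi-label AU detection sketch: a CNN maps a face image to one
# independent binary label per Action Unit. Architecture, input size (96x96)
# and AU count (12) are illustrative assumptions only.
import torch
import torch.nn as nn

NUM_AUS = 12  # assumed number of AUs; the exact set varies by dataset

class AUDetector(nn.Module):
    def __init__(self, num_aus: int = NUM_AUS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # One logit per AU; sigmoid + BCE treats each AU as an independent
        # binary detection task (multi-label, not mutually exclusive classes).
        self.classifier = nn.Linear(64, num_aus)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logits, one per AU

model = AUDetector()
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy over the label set

images = torch.randn(8, 3, 96, 96)                    # batch of face crops
targets = torch.randint(0, 2, (8, NUM_AUS)).float()   # 0/1 activation per AU
loss = criterion(model(images), targets)
loss.backward()
# At inference, AU k is predicted as active when sigmoid(logit_k) > 0.5.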