2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
DOI: 10.1109/fg.2019.8756543
Cross-domain AU Detection: Domains, Learning Approaches, and Measures

Abstract: Facial action unit (AU) detectors have performed well when trained and tested within the same domain. Do AU detectors transfer to new domains in which they have not been trained? To answer this question, we review literature on cross-domain transfer and conduct experiments to address limitations of prior research. We evaluate both deep and shallow approaches to AU detection (CNN and SVM, respectively) in two large, well-annotated, publicly available databases, Expanded BP4D+ and GFT. The databases differ in ob…
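The abstract's central question — how much classifier performance drops when a detector is evaluated outside its training domain — can be illustrated with a toy within- vs. cross-domain comparison. This is a minimal sketch on synthetic data, with a nearest-centroid classifier standing in for the paper's SVM/CNN detectors; all names and the domain-shift model are illustrative, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, shift):
    """Synthetic two-class 'AU absent/present' data; `shift` models a domain gap."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 8))
    X1 = rng.normal(loc=1.5 + shift, scale=1.0, size=(n, 8))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    # One mean vector per class: a stand-in for a trained detector.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(centroids, X, y):
    labels = np.array(sorted(centroids))
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
    return float((labels[dists.argmin(axis=1)] == y).mean())

X_train, y_train = make_domain(200, shift=0.0)   # source domain (e.g., "BP4D+-like")
X_same, y_same   = make_domain(200, shift=0.0)   # held-out, same domain
X_cross, y_cross = make_domain(200, shift=1.0)   # shifted target domain ("GFT-like")

model = fit_centroids(X_train, y_train)
within = accuracy(model, X_same, y_same)
cross  = accuracy(model, X_cross, y_cross)
```

With this synthetic shift, `within` stays high while `cross` drops, mirroring the performance decrease the paper and its citing works report for cross-domain AU detection.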

Cited by 33 publications (27 citation statements) · References 55 publications
“…More generally, generalizability of models and decision thresholds across databases or domains are open research questions. Decreases in classifier performance are common in cross-domain settings (Onal Ertugrul et al., 2019a) even when models are trained on large databases. Future work should explore cross-domain generalizability of models and thresholds in large databases that vary in pose characteristics.…”
Section: Discussion
Confidence: 99%
“…Traditional AU detection methods are based on (i) extracting appearance (Jiang et al., 2011; Eleftheriadis et al., 2015; Baltrusaitis et al., 2018) or geometric features (Lucey et al., 2007; Du et al., 2014) from the whole face and (ii) obtaining shallow representations as histograms of these features, thus ignoring the specificity of facial parts to AUs (Shojaeilangari et al., 2015). Deep approaches using the whole face to train CNNs (Hammal et al., 2017; Onal Ertugrul et al., 2019a) also ignore the specificity of facial parts. More recent approaches focus on obtaining local representations using patch learning.…”
Section: Patch Learning
Confidence: 99%
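The contrast this citing work draws — whole-face histograms that discard location versus patch-based representations that keep it — can be sketched in a few lines. This is a hedged illustration, not any cited system's pipeline: a gradient-orientation histogram (in the spirit of shallow appearance descriptors) computed once over the whole image versus per-patch over a grid, with all function names and the 4×4 grid being assumptions:

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Magnitude-weighted histogram of gradient orientations — a shallow
    appearance descriptor over one image region."""
    gy, gx = np.gradient(patch.astype(float))
    angles = np.arctan2(gy, gx)                 # orientations in [-pi, pi]
    mags = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=mags)
    total = hist.sum()
    return hist / total if total > 0 else hist

def patch_descriptor(face, grid=4, bins=8):
    """Concatenate per-patch histograms so each region (brow, eye, mouth, ...)
    keeps its own statistics, unlike a single whole-face histogram."""
    h, w = face.shape
    ph, pw = h // grid, w // grid
    feats = [orientation_histogram(face[i*ph:(i+1)*ph, j*pw:(j+1)*pw], bins)
             for i in range(grid) for j in range(grid)]
    return np.concatenate(feats)

rng = np.random.default_rng(1)
face = rng.random((64, 64))                 # stand-in for a face crop
whole = orientation_histogram(face)         # 8-dim, location-agnostic
local = patch_descriptor(face)              # 4*4*8 = 128-dim, location-aware
```

The whole-face descriptor pools all gradients into one 8-bin histogram, so a brow raise and a lip pull can look identical; the patch descriptor preserves which region produced which statistics, which is the motivation the quote gives for patch learning.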
“…Because manual FACS coding has the disadvantage of being time consuming, automatic detection of FACS AUs has been an active area of research [5]. Automated facial AU detection systems are available as both commercial tools (e.g., Affectiva, FaceReader) and open-source tools (e.g., OpenFace [6, 7] and Automated Facial Affect Recognition (AFAR) [8, 9]). One study found that OpenFace and AFAR generally performed similarly, but average results were slightly better for AFAR [5].…”
Section: Introduction
Confidence: 99%
“…Second, there has been no systematic comparison of AU detection accuracy among systems. FaceReader is a commercial software designed to analyze facial expressions, whereas OpenFace [6, 7] is the dominant shareware automatic facial computing system for many applied situations [17, 18], and AFAR is an open-source, state-of-the-art, algorithm-based user-friendly tool for automated AU detection [8, 9]. Although comparisons of the performance of these systems are interesting for newcomers and important in terms of system selection, to our best knowledge, no studies have compared these three tools as of yet.…”
Section: Introduction
Confidence: 99%