2021
DOI: 10.3390/app112311171

Viewpoint Robustness of Automated Facial Action Unit Detection Systems

Abstract: Automatic facial action detection is important, but no previous studies have evaluated how accurately pre-trained models detect facial actions as the angle of the face changes from frontal to profile. Using static facial images obtained at various angles (0°, 15°, 30°, and 45°), we investigated the performance of three automated facial action detection systems (FaceReader, OpenFace, and Py-Feat). The overall performance was best for OpenFace, followed by FaceReader and Py-Feat. The performance of FaceRea…


Cited by 7 publications (6 citation statements)
References 50 publications
“…FaceReader 9.0 also provided evidence that the side of the face with the electrode attachment for ZM (participant's left face, which was the right side in the video recordings) demonstrated lower sensitivity, positive predictive value, and F1 score than the side without the electrode. This showed that in real-life applications, the automated FACS would not be robust against the presence of facial accessories or scars in addition to the effect of ever-changing viewpoints [24]. FaceReader, which could evaluate bilateral AUs separately, seems more useful against partial visual blockade.…”
Section: Discussion
confidence: 99%
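The quoted comparison rests on three per-AU detection metrics: sensitivity, positive predictive value, and F1 score. A minimal sketch of how they follow from raw detection counts (the counts below are hypothetical, not the study's data):

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Per-AU sensitivity (recall), positive predictive value (precision),
    and F1 score from true-positive, false-positive, and false-negative counts."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    denom = ppv + sensitivity
    f1 = 2 * ppv * sensitivity / denom if denom else 0.0
    return {"sensitivity": sensitivity, "ppv": ppv, "f1": f1}

# Hypothetical counts for one AU: the side of the face partially occluded
# by the electrode versus the clear side.
occluded = detection_metrics(tp=40, fp=20, fn=25)
clear = detection_metrics(tp=55, fp=10, fn=10)
```

Lower sensitivity and PPV on the occluded side necessarily pull its F1 score down, which is the pattern the citing study reports for the electrode-attached side.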
“…An XGBoost classifier, Feat-XGB [67], was used as the AU detector, which used PCA-reduced HOG features for AU predictions, as with OpenFace [54]. It was trained for 20 AUs (1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 18, 20, 23, 24, 25, 26, 28, 43) using BP4D [57], BP4D+ [68], DISFA [55], DISFA+ [69], CK+ [70], JAFFE [71], Shoulder Pain [58], and EmotioNet [72] and validated using WIDER FACE [51], 300W [73], NAMBA [24], and BIWI-Kinect [74]. The average F1 score was 0.54 (AU4 = 0.64 and AU12 = 0.83) [42].…”
Section: Py-Feat
confidence: 99%
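The Feat-XGB detector described above chains three steps: HOG feature extraction, PCA dimensionality reduction, and a gradient-boosted classifier per AU. The sketch below mirrors that pipeline shape only, using scikit-learn stand-ins and synthetic data; the real detector uses HOG descriptors from aligned face crops and an XGBoost model, neither of which is reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-image HOG descriptors (in the real pipeline
# these come from a HOG extractor run over an aligned face crop).
X = rng.normal(size=(200, 128))
# Synthetic binary AU label (present/absent), loosely tied to one feature.
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Reduce the descriptor with PCA, then fit a boosted classifier on the
# reduced features, matching the HOG -> PCA -> boosting structure.
pca = PCA(n_components=20, random_state=0)
X_red = pca.fit_transform(X)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_red[:150], y[:150])
acc = clf.score(X_red[150:], y[150:])
```

Because AU occurrence is heavily imbalanced in most datasets, per-AU F1 (the 0.54 average quoted above) rather than raw accuracy is the metric reported in practice.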
“…The Facial Action Coding System considers AUs as having the ability to describe all facial movements anatomically [28]. While OpenFace does not guarantee the same performance that manual facial coding does, there was a sufficient biserial correlation (r = .80) between OpenFace and expert FACS coders' performances on static frontal facial images of Japanese persons [43]. OpenFace can detect 18 AUs: 1 (inner brow raiser), 2 (outer brow raiser), 4 (brow lowerer), 5 (upper lid raiser), 6 (cheek raiser), 7 (lid tightener), 9 (nose wrinkler), 10 (upper lip raiser), 12 (lip corner puller), 14 (dimpler), 15 (lip corner depressor), 17 (chin raiser), 20 (lip stretcher), 23 (lip tightener), 25 (lips part), 26 (jaw drop), 28 (lip suck), and 45 (blink).…”
Section: Discussion
confidence: 99%
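The r = .80 agreement quoted above is a (point-)biserial correlation, i.e. a Pearson correlation between a binary variable (expert FACS occurrence codes) and a continuous one (automated AU scores). A minimal sketch with made-up numbers, not the study's data, using SciPy:

```python
import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical data: an expert coder's binary AU occurrence codes and the
# corresponding continuous OpenFace intensity estimates for ten images.
expert_codes = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
openface_intensity = np.array([0.1, 0.2, 1.4, 1.8, 2.1, 0.3, 1.6, 0.4, 1.9, 2.0])

# r close to 1 means images the expert coded as AU-present also received
# high automated intensities.
r, p = pointbiserialr(expert_codes, openface_intensity)
```

A high r here reflects strong human-machine agreement on which images show the AU, which is the sense in which the citing paper treats r = .80 as sufficient.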
“…Previous studies found that OpenFace performed AU detection at higher-than-chance levels, both for posed datasets and for videos collected "in the wild" from YouTube (Namba et al., 2021a). However, at least for still images, accuracy drops when the face is angled at 45°, though it remains higher than chance (Namba et al., 2021b). Examples of the four Action Units are provided in Figure 2.…”
Section: Eyebrow Movement Analysis (OpenFace 2.0)
confidence: 93%