2017
DOI: 10.3758/s13428-017-0996-1

Facial expression analysis with AFFDEX and FACET: A validation study

Abstract: The goal of this study was to validate AFFDEX and FACET, two algorithms classifying emotions from facial expressions, in iMotions's software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare the classification quality…
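
As a rough illustration of the Matching Score idea, the sketch below computes the proportion of pictures whose top-scoring emotion matches the database's intended label. The data structure and names are hypothetical, not the paper's or iMotions' actual export format:

```python
# A minimal sketch of the "Matching Score" idea, assuming hypothetical
# classifier output; this is NOT iMotions' implementation or export format.
predictions = [
    {"intended": "anger",   "scores": {"anger": 0.91, "disgust": 0.05, "joy": 0.01}},
    {"intended": "joy",     "scores": {"anger": 0.02, "disgust": 0.03, "joy": 0.95}},
    {"intended": "disgust", "scores": {"anger": 0.40, "disgust": 0.35, "joy": 0.02}},
]

def matching_score(items):
    """Proportion of pictures whose top-scoring emotion matches the intended label."""
    hits = sum(
        1 for item in items
        if max(item["scores"], key=item["scores"].get) == item["intended"]
    )
    return hits / len(items)

print(f"Matching score: {matching_score(predictions):.2f}")  # 0.67 on the toy data
```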

Cited by 196 publications (179 citation statements) · References 67 publications

“…We established the main effect of dataset type on recognition accuracy, F(2, 7317) = 113.92, p < .001, ηp² = .033, with AFEW containing the least detectable emotions (M = 0.40, 95% CI [0.38, 0.42], p < .001) and RAVDESS containing the most detectable emotions (M = 0.57, 95% CI [0.56, 0.59], p < .001). Consistent with the results of other studies [3], accuracy scores for emotion labels were higher for acted facial expressions (RAVDESS M = 0.57, 95% CI [0.56, 0.59]; SAVEE M = 0.54, 95% CI [0.52, 0.56]) than for more challenging 'in the wild' expressions (AFEW M = 0.40, 95% CI [0.38, 0.42], p < .001). A significant interaction was revealed between algorithm type and dataset type, F(6, 7317) = 7.45, p < .001, ηp² = .007.…”
Section: Dataset (supporting)
confidence: 93%
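
For readers who want to reproduce this style of analysis, the sketch below runs a two-way ANOVA with an algorithm × dataset interaction and derives partial eta squared, using statsmodels on simulated stand-in data; the column names and data are assumptions, not the cited study's materials:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated stand-in data (the cited study's data are not available here):
# one row per rated clip, with recognition accuracy, algorithm, and dataset.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "accuracy":  rng.uniform(0.0, 1.0, 600),
    "algorithm": rng.choice(["algo1", "algo2", "algo3", "algo4"], 600),
    "dataset":   rng.choice(["AFEW", "RAVDESS", "SAVEE"], 600),
})

# Two-way ANOVA with an algorithm x dataset interaction, mirroring the
# F-tests quoted above.
model = ols("accuracy ~ C(algorithm) * C(dataset)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# Partial eta squared: SS_effect / (SS_effect + SS_residual).
ss_res = table.loc["Residual", "sum_sq"]
table["eta_p2"] = table["sum_sq"] / (table["sum_sq"] + ss_res)
table.loc["Residual", "eta_p2"] = float("nan")  # not meaningful for residuals
print(table)
```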
“…We tested the performance of four commercial algorithms on datasets with different levels of control over the conditions of data acquisition. Because different algorithms use distinct statistical methods and training datasets for their machine-learning procedures, they classify emotions differently [3]. If the algorithms are capable of recognizing what people actually express (in other words, if the technologies work), then different algorithms should give consistent predictions per video recording, using particular emotion labels.…”
(mentioning)
confidence: 99%
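
The consistency argument in this passage can be made concrete: if the technologies measure the same underlying expression, per-recording labels from different algorithms should agree. A minimal sketch of such an agreement check, with hypothetical labels and names (not the study's actual outputs):

```python
from itertools import combinations

# Hypothetical per-recording labels from four commercial classifiers;
# clip IDs and emotion labels are illustrative only.
predictions = {
    "clip_001": ["joy", "joy", "joy", "surprise"],
    "clip_002": ["anger", "disgust", "anger", "anger"],
    "clip_003": ["fear", "surprise", "surprise", "fear"],
}

def pairwise_agreement(labels):
    """Fraction of classifier pairs that assign the same label to a recording."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

for clip, labels in predictions.items():
    print(f"{clip}: agreement = {pairwise_agreement(labels):.2f}")
```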
“…We then analyzed each of the 4,648 recordings with FACET (23). FACET is a computer-vision tool that automatically detects 20 FACS-based AUs (see Supplementary Table 1 for descriptions and depictions of FACET-detected AUs).…”
Section: Methods (mentioning)
confidence: 99%
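
FACET-style tools typically emit a per-frame time series of AU evidence values that is then aggregated per recording before analysis. The sketch below uses a hypothetical column layout (not FACET's documented export) to show one way such aggregation might look:

```python
import pandas as pd

# Hypothetical frame-level export: one row per video frame, one evidence
# column per action unit (AU). Real FACET exports differ; adjust the column
# names to match the actual file.
frames = pd.DataFrame({
    "timestamp_s": [0.00, 0.03, 0.07, 0.10],
    "AU4":  [0.2, 1.3, 1.8, 0.9],    # brow lowerer evidence
    "AU12": [-0.5, -0.2, 0.1, 0.4],  # lip corner puller evidence
})

# Collapse the time series into per-recording summary features, a common
# preprocessing step before statistical analysis.
au_cols = [c for c in frames.columns if c.startswith("AU")]
summary = frames[au_cols].agg(["mean", "max"])
print(summary)
```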
“…Instead, researchers tend to rely on measures of emotional responding that are not observable in social interactions (e.g., heart rate variability). Recently, automated computer-vision and machine-learning (CVML) based approaches have emerged that make it possible to scale AU annotation to larger numbers of participants (e.g., 21-23), thus making follow-up studies more feasible. In fact, inter-disciplinary applications of CVML have allowed researchers to automatically identify pain severity (e.g., 24), depressive states (e.g., 25), and discrete emotions from facial expressions (e.g., 26).…”
Section: Using Computer-vision and Machine Learning To Automate Facial… (mentioning)
confidence: 99%
“…Some users' faces captured during the main experiment are shown in Figure 2, where different reactions can be seen across the simulated flights. The effectiveness of the Face Reader software has been demonstrated in several studies and publications, where it serves as a reference for emotion detection from facial expressions in various contexts and applications [29][30][31].…”
Section: Facial Emotion Sensing (mentioning)
confidence: 99%