2018 · Preprint · DOI: 10.1101/458380
Using Computer-vision and Machine Learning to Automate Facial Coding of Positive and Negative Affect Intensity

Abstract: Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative …

Cited by 2 publications (1 citation statement, published 2022) · References 55 publications (67 reference statements)
“…Currently, as the field of artificial intelligence and machine learning is evolving rapidly, attempts are being made to classify images of persons and their facial expressions into distinct categories of the respective underlying emotions [18,19,20,21,22,23,24] or to objectively measure the severity of experienced pain [25,26,27,28] or affect [29]. Until now, most research has relied on simple two-dimensional images as input data for training the artificial neural networks and consecutive evaluation [22].…”
Section: Automated Facial Expression
Citation type: mentioning · confidence: 99%
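The citing work describes classifying 2D facial images into discrete emotion categories with trained neural networks. As a minimal sketch of that idea, the snippet below trains a plain softmax (multinomial logistic regression) classifier on synthetic toy "images" — the data, category names, and model choice here are all illustrative assumptions, not the paper's actual dataset or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 2D face images: three synthetic 8x8 "expression" classes,
# each a noisy copy of a distinct base pattern (hypothetical data).
EMOTIONS = ["happy", "sad", "neutral"]  # hypothetical category names
bases = rng.normal(size=(3, 64))
X = np.vstack([b + 0.3 * rng.normal(size=(40, 64)) for b in bases])
y = np.repeat(np.arange(3), 40)

# Softmax classifier trained by batch gradient descent -- a deliberately
# minimal stand-in for the neural networks used in the cited studies.
W = np.zeros((64, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)                 # gradient of cross-entropy
    W -= 1.0 * (X.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

pred = (X @ W + b).argmax(axis=1)
accuracy = (pred == y).mean()
```

Flattening each image into a 64-dimensional vector mirrors the "simple two-dimensional images as input" setup the citing authors note; real systems would replace the linear model with a convolutional network operating on the 2D pixel grid.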