2018
DOI: 10.1109/taffc.2016.2625250
ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors


Cited by 345 publications (251 citation statements)
References 48 publications
“…The only similar work that uses the 16PF methodology is presented in [23]. However, it only focuses on analyzing a set of still images using a Convolutional Neural Network (CNN) and CNN features, while our research uses the FACS methodology for studying the face, as FACS is better at predicting hidden emotions [24][25][26], hence will provide more accuracy and reliability to the personality prediction task knowing there is a close relationship between personality traits and how emotions are expressed [27,28]. FACS [29] also offers an in-depth analysis of the facial muscle activity by studying micro expressions and, as we use video recordings of subjects' frontal face and not still images, our paper shows significant prediction accuracy improvement compared to [23] which we detail in the next sections.…”
Section: Introduction (mentioning)
confidence: 99%
“…Content-centric approaches [17], [18] predict the likely elicited emotions by examining image, audio and videobased emotion correlates [17], [23], [25]. In contrast, usercentric AR methods [14]- [16] estimate the stimulus-evoked emotion based on physiological changes observed in viewers (content consumers). Physiological signals indicative of emotions include pupillary dilation [26], eye-gaze patterns [9], [27] and neural activity [14], [15], [28].…”
Section: Affect Recognition (mentioning)
confidence: 99%
“…Big-five personality scales and affective self-ratings of 58 users together with their EEG, ECG, GSR, and facial activity data were included in the ASCERTAIN dataset [123] . The number of videos used as the stimulus is 36 and the length of each video clip is between 51 and 128 seconds.…”
Section: Datasets Consisting Of Videos Only (mentioning)
confidence: 99%
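The excerpt above summarizes the reported structure of the ASCERTAIN dataset: 58 participants, Big-Five personality scores, affective self-ratings, and EEG, ECG, GSR, and facial-activity recordings for 36 video stimuli lasting 51-128 seconds each. As a rough illustration only, the Python sketch below shows one way such a per-participant record could be represented in memory; all class and field names are hypothetical and do not reflect the dataset's actual distribution format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class ParticipantRecord:
    """Hypothetical container for one ASCERTAIN-style participant.

    Field names are illustrative assumptions, not the published format.
    """
    participant_id: int
    big_five: Dict[str, float]  # e.g. {"extraversion": 5.2, ...}
    # Per-clip affective self-ratings for the 36 video stimuli.
    self_ratings: List[Dict[str, float]] = field(default_factory=list)
    # Per-clip physiological signals; each array is (n_samples, n_channels).
    eeg: List[np.ndarray] = field(default_factory=list)
    ecg: List[np.ndarray] = field(default_factory=list)
    gsr: List[np.ndarray] = field(default_factory=list)
    facial_activity: List[np.ndarray] = field(default_factory=list)


# The full collection would then be 58 such records, one per participant,
# each holding responses to the 36 clips (51-128 s long).
dataset: List[ParticipantRecord] = []
```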