In this work, we present DECAF, a multimodal dataset for decoding user physiological responses to affective multimedia content. Unlike datasets such as DEAP [15] and MAHNOB-HCI [31], DECAF contains (1) brain signals acquired using the Magnetoencephalogram (MEG) sensor, which requires little physical contact with the user's scalp and consequently facilitates naturalistic affective responses, and (2) explicit and implicit emotional responses of 30 participants to the 40 one-minute music video segments used in [15] and to 36 movie clips, thereby enabling comparisons between the EEG and MEG modalities as well as between movie and music stimuli for affect recognition. In addition to MEG data, DECAF comprises synchronously recorded near-infrared (NIR) facial videos, horizontal Electrooculogram (hEOG), Electrocardiogram (ECG), and trapezius Electromyogram (tEMG) peripheral physiological responses. To demonstrate DECAF's utility, we present (i) a detailed analysis of the correlations between participants' self-assessments and their physiological responses, and (ii) single-trial classification results for valence, arousal and dominance, with performance evaluation against existing datasets. DECAF also contains time-continuous emotion annotations for the movie clips from seven users, which we use to demonstrate dynamic emotion prediction.
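To make the single-trial classification setup concrete, the sketch below illustrates per-participant, leave-one-trial-out prediction of a binary affect label (e.g., high vs. low arousal) from pre-extracted physiological features. The feature matrix, the Gaussian Naive Bayes classifier, and the random placeholder data are assumptions for illustration only and do not reproduce DECAF's exact feature extraction or evaluation protocol.

# Minimal sketch of single-trial, per-participant affect classification.
# Features, classifier choice, and data below are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def single_trial_classification(features, labels):
    """Leave-one-trial-out classification of a binary affect label.

    features : (n_trials, n_features) per-trial physiological features
    labels   : (n_trials,) binary labels (e.g., high vs. low arousal)
    Returns the fraction of trials classified correctly.
    """
    clf = make_pipeline(StandardScaler(), GaussianNB())
    scores = cross_val_score(clf, features, labels, cv=LeaveOneOut())
    return scores.mean()

# Hypothetical usage: 36 movie-clip trials, 10 MEG-derived features per trial
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 10))
y = rng.integers(0, 2, size=36)  # high/low arousal from self-assessments
print(f"Leave-one-trial-out accuracy: {single_trial_classification(X, y):.2f}")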
This paper presents a new multimodal database, and the associated results, for characterization of affect (valence, arousal and dominance) using Magnetoencephalogram (MEG) brain signals and peripheral physiological signals (horizontal EOG, ECG, trapezius EMG). We attempt single-trial classification of affect in movie and music video clips using emotional responses recorded from eighteen participants. The main findings of this study are that: (i) the MEG signal effectively encodes affective viewer responses, (ii) clip arousal is better predicted by MEG, while peripheral physiological signals are more effective for predicting valence, and (iii) prediction performance is better for movie clips than for music video clips.
This paper presents a method for inferring the Positive and Negative Affect Schedule (PANAS) and the Big-Five personality traits of 35 participants through the analysis of their implicit responses to 16 emotional videos. The modalities used to record the implicit responses are (i) EEG, (ii) peripheral physiological signals (ECG, GSR), and (iii) facial landmark trajectories. Personality traits and PANAS scores are predicted using linear regression models trained independently on each modality. The main findings of this study are that: (i) PANAS and personality traits of individuals can be predicted from users' implicit responses to affective video content, (ii) ECG+GSR signals yield a 70%±8% F1-score for the distinction between extroverts and introverts, (iii) EEG signals yield a 69%±6% F1-score for the distinction between creative and non-creative people, and (iv) predictions of agreeableness, emotional stability, and baseline affective states are significantly better than chance level.
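As a concrete illustration of the per-modality regression pipeline described above, the sketch below fits a linear regression model to one modality's features and evaluates a median-split F1-score for a trait such as extraversion. The feature matrix, the leave-one-subject-out protocol, and the median-split evaluation are illustrative assumptions and may not match the paper's exact procedure.

# Minimal sketch of per-modality personality-trait prediction.
# Feature dimensions, trait scores, and the evaluation split are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import f1_score

def predict_trait_per_modality(modality_features, trait_scores):
    """Leave-one-subject-out linear regression of one trait from one modality.

    modality_features : (n_subjects, n_features) implicit-response features
                        (e.g., ECG+GSR statistics)
    trait_scores      : (n_subjects,) continuous trait scores (e.g., extraversion)
    Returns the F1-score of a median split of predicted vs. true scores.
    """
    predictions = np.zeros_like(trait_scores, dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(modality_features):
        model = LinearRegression().fit(modality_features[train_idx],
                                       trait_scores[train_idx])
        predictions[test_idx] = model.predict(modality_features[test_idx])
    true_binary = trait_scores > np.median(trait_scores)   # extrovert vs. introvert
    pred_binary = predictions > np.median(predictions)
    return f1_score(true_binary, pred_binary)

# Hypothetical usage: 35 subjects, 20 ECG+GSR features each
rng = np.random.default_rng(1)
X = rng.normal(size=(35, 20))
y = rng.normal(size=35)  # continuous extraversion scores
print(f"Median-split F1: {predict_trait_per_modality(X, y):.2f}")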
This paper presents a characterization of affect (valence and arousal) using the Magnetoencephalogram (MEG) brain signal. We attempt single-trial classification of movie and music video clips with MEG responses recorded from seven participants. The main findings of this study are that: (i) the MEG signal effectively encodes affective viewer responses, (ii) clip arousal is better predicted than valence using MEG, and (iii) prediction performance is better for movie clips than for music videos.