In this paper, we present a method to improve emotion recognition based on the fusion of local cortical activations and dynamic functional network patterns. We estimate cortical activations using power spectral density (PSD) computed with the Burg autoregressive model, and functional connectivity networks using the phase locking value (PLV). Both the cortical activations and the connectivity networks show distinct patterns across the three emotions in all frequency bands. Fusion significantly improves classification performance in terms of accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AROC), p < 0.05. Averaged over all evaluation metrics, fusion yields improvements of 6.84% over PSD alone and 4.1% over PLV alone. These results demonstrate the advantage of fusing cortical activations with dynamic functional networks for developing human-computer interaction systems in real-world applications.
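The phase locking value used above has a compact definition: extract the instantaneous phase of each signal (commonly via the Hilbert transform) and take the magnitude of the average unit phasor of the phase difference. A minimal sketch in Python, assuming NumPy/SciPy; the signals and parameters are illustrative, not the paper's:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length signals.

    Returns a value in [0, 1]: 1 means a constant phase difference
    (perfect locking), values near 0 mean random relative phase.
    """
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Two 10 Hz sinusoids with a fixed phase offset are strongly locked.
t = np.linspace(0, 1, 512, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)
print(plv(a, b))  # close to 1.0
```

In practice PLV is computed per frequency band (after band-pass filtering) and per channel pair, producing the connectivity matrices that the fusion step combines with PSD features.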
We present one of the first studies attempting to differentiate between genuine and acted emotional expressions using EEG data, along with the first EEG dataset (available here) containing recordings of subjects producing genuine and fake emotional expressions. Our experimental paradigm targets classification of smiles: genuine smiles, fake/acted smiles, and neutral expressions. We propose multiple methods to extract intrinsic features from the EEG of the three expression conditions (genuine, neutral, and fake/acted smile), using three time-frequency analysis methods at three frequency bands: discrete wavelet transform (DWT), empirical mode decomposition (EMD), and DWT incorporated into EMD (DWT-EMD). We evaluated the proposed methods with several classifiers, including k-nearest neighbors (KNN), support vector machine (SVM), and artificial neural network (ANN). The experiment involved 28 subjects, each undergoing the three types of emotional expression: genuine, neutral, and fake/acted. The results show that incorporating DWT into EMD extracts more hidden features than DWT or EMD alone. The power spectral features extracted by DWT, EMD, and DWT-EMD show distinct neural patterns across the three emotional expressions in all frequency bands. In binary classification experiments, DWT or EMD alone achieved acceptable accuracy, reaching a maximum of 84% across emotion types, classifiers, and bands. The combined DWT-EMD with ANN achieved the highest accuracy in classifying genuine from fake expressions, averaging 94.3% in the alpha band and 84.1% in the beta band. Our results suggest combining DWT and EMD in future emotion studies and highlight the association of the alpha and beta frequency bands with emotion.
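As a concrete illustration of the DWT step, a multilevel wavelet decomposition splits the EEG into progressively coarser frequency bands, and a simple band-power feature is the mean squared detail coefficient at each level. A minimal self-contained sketch using a Haar wavelet (the study's actual wavelet, levels, and band assignment are not specified here; this choice is an assumption for illustration):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail halves."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavedec(x, levels):
    """Multilevel Haar decomposition: [detail_1, ..., detail_L, approx_L]."""
    coeffs = []
    for _ in range(levels):
        x, d = haar_step(x)
        coeffs.append(d)
    coeffs.append(x)
    return coeffs

def band_powers(x, levels):
    """Mean squared detail coefficients per level: a simple band-power feature."""
    return [float(np.mean(c ** 2)) for c in wavedec(x, levels)[:-1]]

signal = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 256, endpoint=False))
print(band_powers(signal, 3))
```

In the DWT-EMD variant described above, the detail coefficients would additionally be passed through EMD sifting before the power features are computed.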
In this paper, we present a method to quantify the coupling between brain regions under vigilance and enhanced mental states using partial directed coherence (PDC) and graph theory analysis (GTA). The vigilance state is induced with a modified version of the Stroop color-word task (SCWT), while the enhanced state is produced by audio stimulation with a pure 250 Hz tone presented to both ears simultaneously for one hour while participants performed the SCWT. Mental states were quantified by statistical analysis of GTA-based indexes, behavioral time-on-task (TOT) responses, and the Brunel Mood Scale (BRUMS). The results show that PDC is highly sensitive to vigilance decrement: the brain connectivity network is significantly reduced with increasing TOT, p < 0.05. During the enhanced state, by contrast, the network maintains high connectivity over time and shows significant improvement relative to the vigilance state. The audio stimulation strengthens the connectivity network over the frontal and parietal regions and the right hemisphere, and the increase correlates with individual differences in the magnitude of vigilance enhancement as assessed by response time to stimuli. These results provide evidence that audio stimulation enhances cognitive processing efficiency. The BRUMS was used to evaluate emotional states before and after the vigilance task; fatigue, depression, and anger decreased significantly in the enhancement group relative to the vigilance group, while happiness and calmness increased with audio stimulation, p < 0.05.
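Graph-theory indexes of the kind used here are computed on a connectivity matrix (e.g. thresholded PDC values) by treating channels as nodes. A minimal sketch of one common index, global efficiency, on a binary adjacency matrix (the study's specific GTA indexes and thresholds are not detailed here; this example is an illustrative assumption):

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of a binary undirected graph.

    Average of 1/d(i, j) over all ordered node pairs, where d is the
    shortest-path length (Floyd-Warshall); higher values indicate a
    more integrated network.
    """
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):  # Floyd-Warshall relaxation
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    with np.errstate(divide="ignore"):
        inv = 1.0 / dist
    np.fill_diagonal(inv, 0.0)
    return float(inv.sum() / (n * (n - 1)))

# A fully connected 4-node graph has efficiency 1.0.
full = np.ones((4, 4)) - np.eye(4)
print(global_efficiency(full))  # 1.0
```

A decline in such an index with increasing TOT is the kind of effect the statistical analysis above would test, here hypothetically per band and per condition.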
Our results show that visual stimuli can modulate cognitive workload, and that this modulation can be measured by the single-trial EEG/ERP detection method.