To our knowledge, the six basic emotions proposed by Paul Ekman have been widely studied in the Screen-2D modality, whereas existing VR-3D studies have examined only positive and negative valence. In this study, we investigate whether the six basic emotions elicit stronger brain activation in the VR-3D modality than in the Screen-2D modality. We designed an emotion-induction experiment with the six basic emotions (happiness, surprise, sadness, fear, anger, and disgust) and recorded electroencephalogram (EEG) signals while participants watched VR-3D and Screen-2D videos. Power spectral density (PSD) was calculated to compare brain activation between the VR-3D and Screen-2D modalities during the induction of each of the six basic emotions. Statistical analysis of the relative power differences between the two modalities for each emotion revealed that happiness and surprise showed greater differences in the α and γ frequency bands, whereas sadness, fear, disgust, and anger showed greater differences in the α and θ frequency bands; these differences were observed mainly in the frontal and occipital regions. In addition, all six emotions yielded satisfactory classification accuracy (above 85%) when the two modalities were classified from a subset of power features of the brain activation states for the same emotion. Overall, inducing the same discrete emotion differs significantly between the VR-3D and Screen-2D modalities, with greater brain activation in the VR-3D modality. These findings provide a better understanding of the neural activity underlying discrete emotional tasks assessed in VR environments.
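A minimal sketch of the kind of analysis the abstract describes, computing relative band power from Welch PSD estimates and contrasting the two modalities. The sampling rate, band edges, channel count, and preprocessing here are assumptions for illustration; the abstract does not specify them.

```python
# Relative band power from EEG via Welch's PSD (illustrative; parameters assumed).
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "gamma": (30, 45)}  # assumed band edges

def relative_band_power(eeg, fs=FS, bands=BANDS):
    """eeg: (n_channels, n_samples) array -> dict of per-channel relative band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    total = np.trapz(psd, freqs, axis=-1)
    rel = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        rel[name] = np.trapz(psd[..., mask], freqs[mask], axis=-1) / total
    return rel

# Compare a VR-3D trial with a Screen-2D trial for one emotion (synthetic data).
vr_trial = np.random.randn(32, FS * 60)       # 32 channels, 60 s
screen_trial = np.random.randn(32, FS * 60)
diff_alpha = (relative_band_power(vr_trial)["alpha"]
              - relative_band_power(screen_trial)["alpha"])
print(diff_alpha.shape)  # per-channel relative alpha-power difference
```

Per-channel differences of this form could then feed the statistical tests and the modality classifier mentioned in the abstract.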
Convolutional neural networks (CNNs) are commonly employed for image emotion recognition owing to their ability to extract local features; however, they have difficulty capturing the global representations of images. In contrast, the self-attention modules in a vision transformer network can capture long-range dependencies as global features. Studies have shown that an image's emotion is determined by both its local and global features, and that certain local regions can produce an emotional prioritization effect. We therefore propose a network combining global self-attention features and local multiscale features (CGLF-Net) to recognize an image's emotion, extracting image features from both global and local perspectives. Specifically, in the global feature branch, a cross-scale transformer is employed instead of convolution operations to enhance the model's feature representation. In the local feature branch, an improved feature pyramid module extracts features from different receptive fields, combining semantic information at different scales, and a local attention module based on class activation maps guides the network to focus on locally salient regions. In addition, multibranch loss functions combine the local and global feature branches to enhance the network's ability to capture a comprehensive set of features. The proposed network achieves recognition accuracies of 75.61% and 65.01% on the FI-8 and Emotion-6 benchmark datasets, respectively. These results show that CGLF-Net reliably addresses the difficulty CNNs have in extracting global features and achieves state-of-the-art classification performance.
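A minimal PyTorch sketch of the two-branch idea the abstract outlines: a self-attention branch for global features, a multiscale branch for local features, and a multibranch loss over per-branch and fused predictions. The module structure, dimensions, and loss weights below are illustrative assumptions, not the authors' released CGLF-Net implementation.

```python
import torch
import torch.nn as nn

class GlobalBranch(nn.Module):
    """Stand-in for the cross-scale transformer branch: patch embedding + self-attention."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.patch = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # simple patch embedding
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)

    def forward(self, x):
        tokens = self.patch(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        return self.attn(tokens).mean(dim=1)                # global feature, (B, dim)

class LocalBranch(nn.Module):
    """Stand-in for the multiscale (feature-pyramid-like) local branch."""
    def __init__(self, dim=256):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, dim, 3, stride=s, padding=1), nn.ReLU())
            for s in (1, 2, 4)                               # three receptive-field scales
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        feats = [self.pool(c(x)).flatten(1) for c in self.convs]
        return torch.stack(feats, dim=0).mean(dim=0)         # local feature, (B, dim)

class TwoBranchEmotionNet(nn.Module):
    def __init__(self, num_classes=8, dim=256):
        super().__init__()
        self.global_branch, self.local_branch = GlobalBranch(dim), LocalBranch(dim)
        self.head_g = nn.Linear(dim, num_classes)
        self.head_l = nn.Linear(dim, num_classes)
        self.head_fused = nn.Linear(2 * dim, num_classes)

    def forward(self, x):
        fg, fl = self.global_branch(x), self.local_branch(x)
        return self.head_g(fg), self.head_l(fl), self.head_fused(torch.cat([fg, fl], 1))

def multibranch_loss(outputs, target, weights=(0.3, 0.3, 1.0)):
    """Weighted sum of cross-entropies over the three heads (weights assumed)."""
    ce = nn.functional.cross_entropy
    return sum(w * ce(o, target) for w, o in zip(weights, outputs))
```

Training each head jointly in this way lets the fused prediction benefit from supervision on both branches, which is the role the abstract attributes to the multibranch loss.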