Convolutional neural networks (CNNs) are commonly employed for image emotion recognition owing to their ability to extract local features; however, they have difficulty capturing the global representations of images. In contrast, the self-attention modules in a vision transformer network can capture long-range dependencies as global features. Studies have shown that both the local and global features of an image determine its emotion and that certain local regions can produce an emotional prioritization effect. Therefore, we propose a network combining global self-attention features and local multiscale features (CGLF-Net) to recognize an image's emotion by extracting features from both global and local perspectives. Specifically, in the global feature branch, a cross-scale transformer is employed instead of convolution operations to enhance the model's feature representation. In the local feature branch, an improved feature pyramid module extracts features from different receptive fields, thereby combining semantic information at different scales, and a local attention module based on class activation maps guides the network to focus on locally salient regions. In addition, multibranch loss functions combine the local and global feature branches to enhance the network's ability to capture a comprehensive set of features. The proposed network achieves recognition accuracies of 75.61% and 65.01% on the FI-8 and Emotion-6 benchmark datasets, respectively. These results show that CGLF-Net reliably addresses the difficulty CNNs have in extracting global features and achieves state-of-the-art classification performance.
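To make the two-branch design concrete, the following is a minimal PyTorch sketch, not the authors' CGLF-Net: a self-attention (transformer) global branch, a dilated-convolution stand-in for the multiscale local branch, and per-branch classification heads combined through a multibranch loss. All module names, layer sizes, and loss weights are illustrative assumptions, and the CAM-based local attention module is omitted.

```python
# Minimal two-branch sketch of the global/local fusion idea described above.
# This is NOT the authors' CGLF-Net; module names (GlobalBranch, LocalBranch,
# CGLFSketch) and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalBranch(nn.Module):
    """Patch embedding + transformer encoder: long-range (global) features."""
    def __init__(self, dim=128, patch=16, img=224, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)

    def forward(self, x):
        t = self.embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        return self.encoder(t).mean(dim=1)                       # (B, dim)

class LocalBranch(nn.Module):
    """Small CNN with parallel dilated convs as a stand-in for a
    feature-pyramid-style multiscale (multi-receptive-field) extractor."""
    def __init__(self, dim=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU())
        # Different dilation rates approximate different receptive fields.
        self.scales = nn.ModuleList(
            [nn.Conv2d(64, dim // 4, 3, padding=d, dilation=d)
             for d in (1, 2, 4, 8)])
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        h = self.stem(x)
        h = torch.cat([s(h) for s in self.scales], dim=1)  # fuse scales
        return self.pool(h).flatten(1)                     # (B, dim)

class CGLFSketch(nn.Module):
    def __init__(self, n_classes=8, dim=128):
        super().__init__()
        self.glob, self.loc = GlobalBranch(dim), LocalBranch(dim)
        self.head_g = nn.Linear(dim, n_classes)   # per-branch heads enable
        self.head_l = nn.Linear(dim, n_classes)   # a multibranch loss
        self.head_f = nn.Linear(2 * dim, n_classes)

    def forward(self, x):
        g, l = self.glob(x), self.loc(x)
        return self.head_g(g), self.head_l(l), self.head_f(torch.cat([g, l], 1))

if __name__ == "__main__":
    model = CGLFSketch()
    imgs = torch.randn(2, 3, 224, 224)
    labels = torch.tensor([0, 3])
    out_g, out_l, out_f = model(imgs)
    ce = nn.CrossEntropyLoss()
    # Multibranch loss: weighted sum of the fused and per-branch losses;
    # the 0.5 weights are assumed, not taken from the paper.
    loss = ce(out_f, labels) + 0.5 * ce(out_g, labels) + 0.5 * ce(out_l, labels)
    loss.backward()
    print(loss.item())
```

Supervising each branch with its own head, in addition to the fused head, is one common way to realize a multibranch loss: it keeps both branches informative on their own while the fused head learns to combine them.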
Macro-expressions are widely used in emotion recognition based on electroencephalography (EEG) because they are intuitive external expressions of emotion. Similarly, micro-expressions, as suppressed and brief emotional expressions, can also reflect a person's genuine emotional state. Researchers have therefore begun to study emotion recognition based on micro-expressions and EEG. However, unlike the artifacts generated by macro-expressions, it remains unclear how artifacts generated by micro-expressions affect EEG signals. In this study, we investigated the effects of facial muscle activity caused by micro-expressions during positive emotions on EEG signals. We recorded participants' facial expression images and EEG signals while they watched positive emotion-inducing videos. We then divided the face into 13 regions and extracted main directional mean optical flow features as facial micro-expression image features, and the power spectral densities of the theta, alpha, beta, and gamma frequency bands as EEG features. Multiple linear regression and Granger causality analyses were used to quantify the effect of facial-muscle-activity artifacts on the EEG signals. The results showed that, on average, 11.5% of the EEG signal was affected by muscle artifacts caused by micro-expressions, with the frontal and temporal regions being significantly affected. After removing the artifacts from the EEG signal, the average percentage of affected EEG signal dropped to 3.7%. To the best of our knowledge, this is the first study to investigate the effect of facial artifacts caused by micro-expressions on EEG signals.
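The feature-extraction and causality steps can be sketched as follows on synthetic data: Welch power spectral density band powers for one EEG channel, plus a Granger causality test of whether a facial-muscle (optical-flow) time series helps predict that channel. The 250 Hz sampling rate, band edges, lag choice, and variable names are assumptions; the study's MDMO feature computation and regression pipeline are not reproduced here.

```python
# Illustrative sketch (not the study's exact pipeline): band-power (PSD)
# features from an EEG channel plus a Granger causality test between a
# facial-muscle activity series and that channel. Data are synthetic;
# band edges and the sampling rate are common conventions, assumed here.
import numpy as np
from scipy.signal import welch
from statsmodels.tsa.stattools import grangercausalitytests

FS = 250  # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=FS):
    """Mean Welch PSD within each frequency band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(0)
n = 10 * FS                              # 10 s of synthetic data
muscle = rng.standard_normal(n)          # stand-in for an optical-flow series
eeg = rng.standard_normal(n)
eeg[2:] += 0.4 * muscle[:-2]             # inject a lagged muscle influence

print(band_powers(eeg))

# Does past muscle activity help predict the EEG signal? Column order is
# [effect, candidate cause]; a small p-value suggests Granger causality.
res = grangercausalitytests(np.column_stack([eeg, muscle]), maxlag=4)
p = res[2][0]["ssr_ftest"][1]            # p-value of the F-test at lag 2
print(f"Granger p-value at lag 2: {p:.4f}")
```

In this framing, a significant test at some EEG channel would mark that channel as affected by micro-expression muscle activity, which is the kind of evidence the percentages in the abstract summarize.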