Mental stress is a serious factor that contributes to many health problems. Scientists and physicians have developed various tools to assess the level of mental stress in its early stages, and several neuroimaging tools have been proposed in the literature for assessing mental stress in the workplace. The electroencephalogram (EEG) signal is an important candidate because it contains rich information about mental states and conditions. In this paper, we review existing EEG signal analysis methods for the assessment of mental stress. The review highlights the critical differences between research findings and argues that variations in data analysis methods contribute to several contradictory results. These variations could be due to several factors, including the lack of a standardized protocol, the brain region of interest, stressor type, experiment duration, EEG preprocessing, feature extraction mechanism, and type of classifier. A significant part of mental stress recognition is therefore choosing the most appropriate features. In particular, the complex and diverse range of EEG features, including time-varying, functional, and dynamic brain connections, requires the integration of various methods to understand their associations with mental stress. Accordingly, the review suggests fusing cortical activations with connectivity network measures and deep learning approaches to improve the accuracy of mental stress level assessment.
Expression recognition from non-frontal faces is a challenging research area with growing interest. This paper applies a generic sparse coding feature, inspired by object recognition, to multi-view facial expression recognition. Our extensive experiments on face images with seven pan angles and five tilt angles, rendered from the BU-3DFE database, achieve state-of-the-art results. We achieve a recognition rate of 69.1% on all images with four expression intensity levels, and a recognition rate of 76.1% on images with the strongest expression intensity. We also present a detailed analysis of how expression recognition performance varies across pose changes.
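A minimal sketch of the sparse-coding feature idea described above, assuming scikit-learn; the patch dimensions, dictionary size, and pooling choice are illustrative stand-ins, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# toy data standing in for local face-image descriptors
rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 64))  # 200 flattened 8x8 patches

# learn an overcomplete dictionary and encode each patch sparsely
# (orthogonal matching pursuit, at most 5 active atoms per patch)
dico = MiniBatchDictionaryLearning(
    n_components=100, alpha=1.0,
    transform_algorithm="omp", transform_n_nonzero_coefs=5,
    random_state=0,
)
codes = dico.fit_transform(patches)  # sparse codes, shape (200, 100)

# a simple pooled descriptor for the whole image: max of absolute activations
feature = np.abs(codes).max(axis=0)
```

Pooling the sparse codes over an image (or a spatial grid of regions) yields a fixed-length vector that can be fed to any standard classifier.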
In this paper, we present a method to improve emotion recognition based on the fusion of local cortical activations and dynamic functional network patterns. We estimate the cortical activations using the power spectral density (PSD) with the Burg autoregressive model, and estimate the functional connectivity networks using the phase locking value (PLV). The cortical activations and connectivity networks show distinct patterns across three emotions at all frequency bands. Moreover, fusion significantly improves the classification rate in terms of accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AROC), p < 0.05. The average improvement with fusion across all evaluation metrics is 6.84% and 4.1% compared to PSD and PLV alone, respectively. The results clearly demonstrate the advantage of fusing cortical activations with dynamic functional networks for developing human-computer interaction systems in real-world applications.
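A minimal sketch of this feature-level fusion, assuming NumPy/SciPy and substituting Welch's method for the Burg autoregressive PSD estimator; the PLV follows the standard Hilbert-phase formulation, and the channel count, band limits, and sampling rate are illustrative:

```python
import numpy as np
from scipy.signal import welch, hilbert

def band_power(sig, fs, band):
    """Average Welch PSD within a frequency band (e.g. alpha, 8-13 Hz)."""
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def plv(x, y):
    """Phase locking value between two channels via the Hilbert transform."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

def fused_features(eeg, fs, band=(8.0, 13.0)):
    """Concatenate per-channel band power with all pairwise PLV values."""
    n_ch = eeg.shape[0]
    powers = [band_power(eeg[c], fs, band) for c in range(n_ch)]
    plvs = [plv(eeg[i], eeg[j])
            for i in range(n_ch) for j in range(i + 1, n_ch)]
    return np.array(powers + plvs)

# toy example: 4 channels, 10 s of synthetic data at 128 Hz
rng = np.random.default_rng(0)
feat = fused_features(rng.standard_normal((4, 1280)), fs=128)
print(feat.shape)  # 4 band powers + 6 pairwise PLVs = (10,)
```

The fused vector (activation features followed by connectivity features) can then be passed to any classifier, which is the essence of the fusion step evaluated in the paper.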
Wireless Sensor Networks (WSNs) have attracted much attention across various domains because of their easy maintenance, self-configuration, and scalability. A WSN comprises small sensors that interact with the Internet of Things (IoT) to observe and record physical conditions. The sensor nodes are autonomous and construct an inter-communication topology with each other in an ad-hoc manner. However, the main restrictions of sensor nodes are their finite resources for energy management, data storage, transmission, and processing power. Researchers have proposed different solutions to improve network performance under the bounded resources of such battery-powered nodes; however, equalizing energy consumption while maintaining network throughput remains a central research problem. Furthermore, due to compromised nodes, the data is more prone to security vulnerabilities, so securing transmissions over an unpredictable network is another research concern. The aim of this research article is therefore to propose a secure and energy-aware heuristic-based routing (SEHR) protocol for WSNs that detects and prevents data compromise with efficient performance. Firstly, the proposed protocol uses an artificial-intelligence-based heuristic analysis to accomplish a reliable and intelligent learning scheme. Secondly, it protects transmissions against adversary groups to attain security with the least complexity. Moreover, a route maintenance strategy based on traffic exploration reduces link failures and network disconnectivity. The simulation results demonstrate that the SEHR protocol improves network throughput by an average of 18%, packet drop ratio by 42%, end-to-end delay by 26%, energy consumption by 36%, faulty routes by 38%, network overhead by 44%, and computational overhead by 43% in dynamic scenarios compared to existing work.
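To illustrate the general idea of energy-aware heuristic routing (not the SEHR protocol itself, whose cost model is not given in the abstract), here is a toy sketch: Dijkstra's algorithm over a link cost that penalizes both distance and low residual energy at the next hop. The weights and topology are entirely hypothetical:

```python
import heapq

def link_cost(dist, residual_energy, w_dist=0.6, w_energy=0.4):
    """Hypothetical cost: short links and energy-rich next hops are cheap."""
    return w_dist * dist + w_energy / max(residual_energy, 1e-9)

def best_route(graph, energy, src, dst):
    """Dijkstra over energy-aware link costs.
    graph: {node: {neighbor: distance}}; energy: {node: residual energy}."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nbr, dist in graph[node].items():
            if nbr not in seen:
                heapq.heappush(
                    pq, (cost + link_cost(dist, energy[nbr]), nbr, path + [nbr]))
    return None, float("inf")

# toy network: node A is nearly depleted, so routing avoids it
graph = {"S": {"A": 1.0, "B": 2.0}, "A": {"D": 2.0}, "B": {"D": 1.0}, "D": {}}
energy = {"S": 1.0, "A": 0.1, "B": 0.9, "D": 1.0}
path, cost = best_route(graph, energy, "S", "D")
print(path)  # ['S', 'B', 'D'] -- the longer but energy-safe route
```

Even though S-A-D is geometrically shorter, the energy term steers traffic through B, which is the balancing behavior such protocols aim for.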
In this paper, we present a method to quantify the coupling between brain regions under vigilance and enhanced mental states by utilizing partial directed coherence (PDC) and graph theory analysis (GTA). The vigilance state is induced using a modified version of the Stroop color-word task (SCWT), while the enhancement state is based on audio stimulation with a pure tone of 250 Hz. The audio stimulation was presented to the right and left ears simultaneously for one hour while participants performed the SCWT. The mental states were quantified by means of statistical analysis of GTA-based indices, behavioral responses of time-on-task (TOT), and the Brunel Mood Scale (BRUMS). The results show that PDC is highly sensitive to vigilance decrement: the brain connectivity network is significantly reduced with increasing TOT, p < 0.05. During the enhanced state, by contrast, the connectivity network maintains high connectivity over time and shows significant improvements compared to the vigilance state. The audio stimulation enhances the connectivity network over the frontal and parietal regions and the right hemisphere. The increase in the connectivity network correlates with individual differences in the magnitude of the vigilance enhancement assessed by response time to stimuli. Our results provide evidence for enhanced cognitive processing efficiency with audio stimulation. The BRUMS was used to evaluate the emotional states associated with the vigilance task before and after audio stimulation. BRUMS factors such as fatigue, depression, and anger decreased significantly in the enhancement group compared to the vigilance group, whereas the happiness and calmness factors increased with audio stimulation, p < 0.05.
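A minimal sketch of the graph-theory step, assuming NumPy and NetworkX: a directed connectivity matrix (standing in for PDC estimates) is thresholded into a binary graph, from which standard GTA indices such as density and global efficiency are computed. The matrix values and threshold are illustrative:

```python
import numpy as np
import networkx as nx

# stand-in for an 8-channel directed connectivity (e.g. PDC) matrix
rng = np.random.default_rng(1)
pdc = rng.random((8, 8))
np.fill_diagonal(pdc, 0.0)  # no self-connections

# keep only strong directed connections (threshold is illustrative)
adj = (pdc > 0.7).astype(int)
G = nx.from_numpy_array(adj, create_using=nx.DiGraph)

density = nx.density(G)                       # fraction of possible edges present
eff = nx.global_efficiency(G.to_undirected()) # efficiency is defined on undirected graphs
print(round(density, 3), round(eff, 3))
```

Comparing such indices between task phases (e.g. early vs. late TOT, with vs. without stimulation) is how a connectivity decline or enhancement can be tested statistically.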
This work details the authors' efforts to push the baseline of expression recognition performance on a realistic database. Both subject-dependent and subject-independent emotion recognition scenarios are addressed, as both occur frequently in real-life settings. The approach involves face detection, followed by key point identification, then feature generation, and finally classification. An ensemble of features comprising Hierarchical Gaussianization (HG), Scale-Invariant Feature Transform (SIFT), and optic flow is used, with SVMs in the classification stage. The classification task is divided into person-specific and person-independent emotion recognition. Both manual labels and automatic algorithms for person verification have been attempted, and they give similar performance.
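A minimal sketch of the feature-ensemble-plus-SVM stage, assuming scikit-learn; the descriptor dimensions and random stand-in features are illustrative (computing actual HG, SIFT, or optic-flow descriptors is out of scope here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 120
# stand-ins for the three descriptor families (dimensions are illustrative)
hg = rng.standard_normal((n, 32))     # Hierarchical Gaussianization
sift = rng.standard_normal((n, 64))   # SIFT descriptors
flow = rng.standard_normal((n, 16))   # optic-flow features
y = rng.integers(0, 7, size=n)        # seven expression classes

# feature-level ensemble: concatenate, normalize, classify with an SVM
X = np.hstack([hg, sift, flow])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
score = clf.score(X, y)
```

In the subject-independent scenario the split would be by person rather than by sample, so that no subject appears in both training and test sets.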
This paper demonstrates, to the best of our knowledge, the first attempt at gender and ethnicity identification from silhouetted face profiles using computer vision techniques. The results, obtained on 441 test images, show that silhouetted face profiles carry substantial information, particularly for ethnicity identification. Shape-context-based matching [1] was employed for classification, and the test samples were multi-ethnic. Average accuracy was 71.20% for gender and 71.66% for ethnicity. However, the accuracy was significantly higher for some classes, e.g. 83.41% for females (gender identification) and 80.37% for East and South-East Asians (ethnicity identification).