To improve the accuracy of emotion recognition through end-to-end automatic learning of emotional features in the spatial and temporal dimensions of the electroencephalogram (EEG), an EEG emotional feature learning and classification method using a deep convolutional neural network (CNN) was proposed, based on the temporal features, frequency features, and their combinations for the EEG signals in the DEAP dataset. Shallow machine learning models, including bagging tree (BT), support vector machine (SVM), linear discriminant analysis (LDA), and Bayesian linear discriminant analysis (BLDA), and deep CNN models were used to perform binary emotion classification experiments on the DEAP dataset in the valence and arousal dimensions. The experimental results showed that the deep CNN models, which require no feature engineering, achieved the best recognition performance on the combined temporal and frequency features in both the valence and arousal dimensions: 3.58% higher than the performance of the best traditional classifier (BT) in the valence dimension and 3.29% higher than that of the BT classifier in the arousal dimension.

INDEX TERMS EEG, emotion recognition, convolutional neural network, combined features, deep learning.
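The frequency features that both the shallow classifiers and the CNN consume can be illustrated with a minimal band-power extraction from an EEG segment. This is a sketch only: the 128 Hz rate matches DEAP's preprocessed signals, but the band edges and the synthetic one-channel segment are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within a frequency band (Hz) via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Hypothetical 1-s single-channel EEG segment sampled at 128 Hz
# (DEAP's preprocessed sampling rate), dominated by a 10 Hz oscillation.
fs = 128
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)

# Classic EEG bands (edges are common conventions, assumed here).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(eeg, fs, b) for name, b in bands.items()}
```

For a 10 Hz signal the alpha-band power dominates; stacking such band powers per channel (optionally alongside the raw temporal samples) yields the kind of combined temporal-frequency input described above.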
In this paper, we propose a hierarchical bidirectional gated recurrent unit (GRU) network with attention for human emotion classification from continuous electroencephalogram (EEG) signals. The structure of the model mirrors the hierarchical structure of EEG signals, and the attention mechanism is applied at two levels: EEG samples and epochs. By paying different levels of attention to content of different importance, the model can learn a more significant feature representation of an EEG sequence, one that highlights the contribution of important samples and epochs to its emotional category. We conduct cross-subject emotion classification experiments on the DEAP dataset to evaluate the model's performance. The experimental results show that in the valence and arousal dimensions, our model on 1-s segmented EEG sequences outperforms the best deep baseline, an LSTM model, by 4.2% and 4.6%, and outperforms the best shallow baseline model by 11.7% and 12%, respectively. Moreover, as the epoch length of the EEG sequences increases, our model shows more robust classification performance than the baseline models, which demonstrates that the proposed model can effectively reduce the impact of the long-term non-stationarity of EEG sequences and improve the accuracy and robustness of EEG-based emotion classification.
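The attention mechanism at each level of such a hierarchy amounts to a learned weighted pooling over hidden states. The sketch below shows only that pooling step in isolation, with a random context vector standing in for the parameters that the full model would learn jointly with the bidirectional GRUs; shapes and values are assumptions.

```python
import numpy as np

def attention_pool(h, w):
    """Attention-weighted pooling over a sequence of hidden states.

    h : (T, d) hidden states, e.g., per-epoch outputs of a (bi)GRU layer
    w : (d,)   context vector (learned in a real model; random here)
    Returns the pooled (d,) representation and the (T,) attention weights.
    """
    scores = h @ w                        # alignment score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the T steps
    return weights @ h, weights

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))   # 5 epochs, 8-dim representations (illustrative)
w = rng.normal(size=8)
pooled, weights = attention_pool(h, w)
```

Epochs whose representations align with the context vector receive larger weights, so they contribute more to the pooled vector that feeds the classifier, which is how important epochs are "highlighted" in the description above.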
Macroscale assemblies of well-aligned carbon nanotubes (CNTs) can inherit the intrinsic properties of individual CNTs while avoiding the handling difficulties that arise at the nanometer scale when dealing with individual CNTs. Herein, simple fabrication processes are introduced to produce a variety of macroscale CNT assemblies, including well-aligned CNT bundles, CNT films, and CNT fibers, from the same starting material: spinnable CNT arrays. The electrical and mechanical properties of the as-prepared CNT assemblies have been investigated and compared. It is found that the CNT films show an electrical conductivity of 145–250 S cm−1, which is comparable to that of the CNT fibers but two orders of magnitude higher than that of conventional bucky paper. The CNT fibers exhibit a diameter-dependent tensile strength, which is mainly attributed to nonuniform twisting along the radial direction of the fibers.
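To make the reported conductivity range concrete, one can convert it to a resistance for a strip of given geometry via R = L / (σ·A). The strip dimensions below (1 cm × 1 cm, 10 µm thick) are purely hypothetical and chosen only to illustrate the arithmetic at the lower bound of the 145–250 S cm−1 range.

```python
# Back-of-the-envelope resistance of a hypothetical CNT film strip.
sigma = 145.0        # conductivity, S/cm (lower bound of the reported range)
length = 1.0         # cm, along the current path (assumed)
width = 1.0          # cm (assumed)
thickness = 10e-4    # cm, i.e. 10 um (assumed)

area = width * thickness          # cross-sectional area, cm^2
R = length / (sigma * area)       # resistance in ohms
```

At these assumed dimensions the strip resistance comes out to a few ohms, consistent with the films being far more conductive than conventional bucky paper.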
In the acoustic radiation calculation of vibrating structures in the free field, less attention has been paid to time-domain analysis than to frequency-domain analysis. Nevertheless, time-domain sound calculation is essential for applications in which the dynamic process must be carefully addressed. Previous researchers have worked to improve the efficiency of transient acoustic radiation calculation and have made considerable progress. However, transient acoustic radiation calculation in the free field still suffers from insufficient computational resources and low efficiency when the number of discrete elements is large and the temporal sample sequence is long. To solve these problems, a transient sound field calculation method based on modal expansion and spatial delay is proposed. By constructing a delayed modal acoustic transfer matrix (DMATM), the proposed method transfers the physical coordinates of the structural nodes to low-dimensional modal coordinates while preserving the spatial delay information of the different nodes. To describe the method in detail, the acoustic radiation process of the impact sound synthesis of a cylinder is investigated. The results show that the proposed method is more efficient than previous methods at the same accuracy.
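The core idea, compressing many nodal velocity histories into a few modal coordinates while keeping each node's propagation delay to the field point, can be sketched as follows. The geometry, mode shapes, and the simple delayed 1/r summation are illustrative assumptions, not the paper's exact DMATM formulation.

```python
import numpy as np

c = 343.0                      # speed of sound in air, m/s
fs = 8000                      # temporal sampling rate, Hz (assumed)
n_nodes, n_modes, n_steps = 50, 4, 256   # illustrative problem sizes

rng = np.random.default_rng(1)
phi = rng.normal(size=(n_nodes, n_modes))   # mode-shape matrix (assumed)
q = rng.normal(size=(n_modes, n_steps))     # modal coordinate histories
v = phi @ q                                 # nodal velocities, (N x T),
                                            # reconstructed from few modes

nodes = rng.uniform(-0.1, 0.1, size=(n_nodes, 3))  # node positions, m
field = np.array([1.0, 0.0, 0.0])                  # field point, m
r = np.linalg.norm(nodes - field, axis=1)          # node-to-field distances
delay = np.rint(r / c * fs).astype(int)            # per-node delay, samples

# Delayed sum of nodal contributions (amplitude ~ 1/r): each node's
# velocity history is shifted by its own propagation delay before summing.
p = np.zeros(n_steps + delay.max())
for i in range(n_nodes):
    p[delay[i]:delay[i] + n_steps] += v[i] / r[i]
```

The efficiency gain comes from the first line of the computation: only n_modes coordinate histories need to be stored and propagated instead of n_nodes nodal histories, while the per-node delays retain the spatial information that a plain modal reduction would discard.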