In recent years, with the continuous development of artificial intelligence and brain-computer interface technology, emotion recognition based on physiological signals, especially electroencephalogram (EEG) signals, has become a popular research topic and attracted wide attention. However, class imbalance in the available data sets, the extraction of affective features from EEG signals, and the design of high-performance classifiers remain great challenges for this task. Motivated by the outstanding performance of deep learning approaches in pattern recognition tasks, we propose a method based on a convolutional neural network combined with the data augmentation algorithm Borderline-SMOTE (borderline synthetic minority oversampling technique). First, we obtain 32-channel EEG signals from the DEAP data set, a standard benchmark for emotion recognition. Then, after data preprocessing, we extract frequency-domain features and apply Borderline-SMOTE to obtain more balanced data. Finally, we train a one-dimensional convolutional neural network for three-class classification on the two emotional dimensions valence and arousal. The proposed method is compared with several traditional machine learning methods and with existing methods by other researchers and is shown to be effective in emotion recognition: the average accuracy rates over 32 subjects are 97.47% on valence and 97.76% on arousal. Moreover, the proposed method with Borderline-SMOTE outperforms the same model without it, demonstrating the benefit of the data augmentation step.
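The abstract above names Borderline-SMOTE as its augmentation step but does not show the authors' implementation. As an illustration only, the following is a minimal numpy sketch of the Borderline-SMOTE-1 idea: minority samples whose neighborhood is majority-dominated (but not purely majority) are marked as borderline, and synthetic minority samples are interpolated between them and their minority-class neighbors. All function and parameter names here are our own.

```python
import numpy as np

def borderline_smote(X, y, minority_label, m=5, k=5, n_new=None, rng=None):
    """Minimal Borderline-SMOTE-1 sketch (illustrative, not the paper's code).

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) labels
    m : neighbors used to decide whether a minority point is "in danger"
    k : minority-class neighbors used for interpolation
    Returns the augmented (X_aug, y_aug).
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    minority = X[y == minority_label]

    def knn_indices(point, pool, n):
        d = np.linalg.norm(pool - point, axis=1)
        return np.argsort(d)[:n]

    # Step 1: mark "danger" minority samples -- those whose m nearest
    # neighbors over the whole set (excluding the point itself) are
    # majority-dominated but not purely majority (pure majority = noise).
    danger = []
    for p in minority:
        idx = knn_indices(p, X, m + 1)[1:]          # drop the point itself
        n_maj = int(np.sum(y[idx] != minority_label))
        if m / 2 <= n_maj < m:
            danger.append(p)
    danger = np.array(danger)
    if len(danger) == 0:
        return X, y

    # Step 2: interpolate between each danger sample and one of its k
    # nearest minority-class neighbors to synthesize new minority samples.
    if n_new is None:
        n_new = int(np.sum(y != minority_label)) - len(minority)  # balance
    synth = []
    for _ in range(max(n_new, 0)):
        p = danger[rng.integers(len(danger))]
        nbrs = knn_indices(p, minority, k + 1)[1:]  # drop the point itself
        q = minority[nbrs[rng.integers(len(nbrs))]]
        synth.append(p + rng.random() * (q - p))
    if not synth:
        return X, y
    X_aug = np.vstack([X, synth])
    y_aug = np.concatenate([y, np.full(len(synth), minority_label)])
    return X_aug, y_aug

# Toy usage: 2 borderline minority points among 8 majority points; synthetic
# minority samples are drawn on the segment between them until balanced.
X = np.array([[0.0, 0.0], [0.05, 0.0],
              [0.0, 0.07], [0.0, -0.07], [0.1, 0.07], [0.1, -0.07],
              [0.2, 0.0], [-0.15, 0.0], [0.05, 0.15], [0.05, -0.15]])
y = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
X_aug, y_aug = borderline_smote(X, y, minority_label=1, m=3, k=1, rng=0)
```

In the real pipeline the rows of `X` would be the frequency-domain EEG feature vectors, with the underrepresented emotion class as the minority label.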
In recent years, with the continuous development of artificial intelligence and brain-computer interface technology, emotion recognition based on physiological signals, especially electroencephalogram (EEG) signals, has become a popular research topic and attracted wide attention. However, extracting effective features from EEG signals and recognizing them accurately with classifiers has become an increasingly important task. Therefore, in this paper, we propose an EEG emotion recognition method based on the ensemble learning method AdaBoost. First, we extract time-domain, time-frequency-domain, and nonlinear features related to emotion from the preprocessed EEG signals and fuse them into an eigenvector matrix. Then, the linear discriminant analysis feature selection method is used to reduce the dimensionality of the features. Next, we use the optimized feature sets to train an AdaBoost-based classifier for binary classification. Finally, the proposed method is tested on the DEAP data set on four emotional dimensions: valence, arousal, dominance, and liking. The proposed method is shown to be effective in emotion recognition, and the best average accuracy rate reaches 88.70% on the dominance dimension. Compared with other existing methods, the performance of the proposed method is significantly improved.
With the continuous development of deep learning, the performance of intelligent diagnosis systems for ocular fundus diseases has improved significantly, but during system training, problems such as a lack of fundus samples and an uneven sample distribution (the number of disease samples is much smaller than the number of normal samples) have become increasingly prominent. In view of these issues, this paper proposes a method for generating fundus images based on a “Combined GAN” (Com-GAN), which can generate both normal fundus images and fundus images with hard exudates, so that the sample distribution becomes more even while the fundus data are expanded. First, existing images are used to train a Com-GAN, which consists of two subnetworks: im-WGAN and im-CGAN. Then, the trained model generates fundus images, which are evaluated qualitatively and quantitatively and added to the original image set to expand the datasets. Finally, a hard exudate detection system is trained on this expanded training set. The expanded datasets effectively improve the generalization ability of the system on the public datasets DIARETDB1 and e-ophtha EX, thereby verifying the effectiveness of the proposed method.
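The abstract above does not detail the im-WGAN architecture, so as background only, here is a deliberately tiny numpy toy of the WGAN training scheme that im-WGAN builds on: a critic is trained to separate real from generated samples under a weight-clipping constraint, and the generator is trained to raise the critic's score on its samples. Everything is reduced to one dimension with linear networks; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for the image distribution: real samples ~ N(3, 0.5).
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

# Linear generator g(z) = a*z + c and linear critic f(x) = w*x.
a, c = 1.0, 0.0          # generator parameters
w = 0.0                  # critic parameter
clip, lr_c, lr_g, n_critic, batch = 0.05, 0.1, 0.1, 5, 64

for step in range(2000):
    # Critic: ascend E[f(real)] - E[f(fake)], then clip the weight
    # (the WGAN Lipschitz constraint).
    for _ in range(n_critic):
        x_r = sample_real(batch)
        x_f = a * rng.standard_normal(batch) + c
        grad_w = x_r.mean() - x_f.mean()      # d/dw of the critic objective
        w = float(np.clip(w + lr_c * grad_w, -clip, clip))
    # Generator: ascend E[f(g(z))] = w * (a*E[z] + c).
    z = rng.standard_normal(batch)
    a += lr_g * w * z.mean()
    c += lr_g * w

fake_mean = (a * rng.standard_normal(10_000) + c).mean()
print(f"real mean 3.0, generated mean {fake_mean:.2f}")
```

After training, the generated distribution's mean sits close to the real mean of 3.0: the generator has been pulled onto the data distribution, which is the same mechanism Com-GAN exploits (with convolutional networks on fundus images) to synthesize extra training samples.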