Accurate recognition and understanding of human emotions is an essential skill that can improve collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that enable accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we utilize the 2D arousal-valence plane to develop four emotion labeling schemes for the EEG signals, such that each labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine (SVM) classifiers that classify the EEG signals of each subject into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset. Moreover, we design three performance evaluation analyses, namely channel-based, feature-based, and neutral-class-exclusion analyses, to quantify how the capability of the proposed approach to discriminate between emotion classes is affected by utilizing different groups of EEG channels covering various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the emotion classes defined by each of the four labeling schemes are within the range of 73.8%–86.2%. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
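The abstract above does not give implementation details, so the following is only a minimal sketch of the general pipeline it describes: a quadratic time-frequency representation of one EEG epoch, a handful of statistical features computed over the time-frequency plane, and a subject-specific SVM. The Wigner-Ville distribution is used here as one member of the QTFD family (the abstract does not name the exact kernel), and the feature set, the names `epochs` and `labels`, and the epoch shapes are illustrative assumptions, not the paper's actual 13 features.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

def wigner_ville(x):
    """Discrete Wigner-Ville distribution (a basic QTFD) of a 1-D signal.

    Returns a (time, frequency) array; the analytic signal is used to
    suppress cross-terms between positive and negative frequencies.
    """
    z = hilbert(np.asarray(x, dtype=float))
    n = len(z)
    W = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        tau = np.arange(-tau_max, tau_max + 1)
        # instantaneous autocorrelation at time t, indexed by lag tau
        acf = np.zeros(n, dtype=complex)
        acf[tau % n] = z[t + tau] * np.conj(z[t - tau])
        W[t] = np.real(np.fft.fft(acf))  # FFT over lag -> frequency axis
    return W

def tf_features(W):
    """A few illustrative joint time-frequency features (stand-ins for the
    paper's 13 extended time/frequency-domain features)."""
    p = np.abs(W) / (np.abs(W).sum() + 1e-12)   # normalize to a 2-D distribution
    flat = p.ravel()
    return np.array([
        flat.mean(), flat.std(), skew(flat), kurtosis(flat),
        -(flat * np.log(flat + 1e-12)).sum(),   # entropy over the TF plane
        p.sum(axis=0).argmax() / W.shape[1],    # normalized dominant frequency
    ])

# Hypothetical usage: `epochs` has shape (n_trials, n_samples); `labels`
# encodes the emotion classes of one labeling scheme for a single subject.
# X = np.stack([tf_features(wigner_ville(e)) for e in epochs])
# clf = SVC(kernel="rbf").fit(X, labels)  # one subject-specific SVM per subject
```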
This paper presents an EEG-based brain-computer interface (BCI) system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs), which are processed by a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded from eighteen intact subjects and four amputated subjects while they imagined performing each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the subset of EEG channels and the TFF category, respectively, that yield the highest classification accuracy across the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two procedures, namely subject-dependent and subject-independent training, which quantify the capability of the proposed approach to capture intra- and inter-personal variations, respectively, in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, under the subject-dependent training procedure, and 80.8% and 87.8%, respectively, under the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations.
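The abstract names a hierarchical classification model but not its structure, so the sketch below shows one plausible two-level hierarchy under stated assumptions: a top-level classifier routes a trial to a task group, and a per-group classifier resolves the specific MI task. The grouping of the eleven tasks and the choice of SVM base classifiers are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

class HierarchicalMIClassifier:
    """Two-level hierarchy: predict a task group, then the task within it.

    `groups` maps a group id to the MI task labels it contains, e.g.
    {0: [0, 1, 2], 1: [3, 4, 5, 6], 2: [7, 8, 9, 10]} (hypothetical split;
    the paper's actual hierarchy may differ). Each group needs >= 2 tasks.
    """
    def __init__(self, groups):
        self.groups = groups
        self.top = SVC(kernel="rbf")                      # group-level classifier
        self.leaf = {g: SVC(kernel="rbf") for g in groups}  # within-group classifiers

    def fit(self, X, y):
        task_to_group = {t: g for g, ts in self.groups.items() for t in ts}
        y_group = np.array([task_to_group[t] for t in y])
        self.top.fit(X, y_group)
        for g, tasks in self.groups.items():
            mask = np.isin(y, tasks)
            self.leaf[g].fit(X[mask], y[mask])
        return self

    def predict(self, X):
        g = self.top.predict(X)
        out = np.empty(len(X), dtype=int)
        for grp in self.groups:
            m = g == grp
            if m.any():
                out[m] = self.leaf[grp].predict(X[m])
        return out
```

Under the subject-dependent procedure, one such model would be trained per subject on that subject's own trials; under the subject-independent procedure, a single model would be trained on trials pooled from the remaining subjects.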
This study aims to increase the number of control dimensions of electroencephalography (EEG)-based brain-computer interface (BCI) systems by distinguishing between the motor imagery (MI) tasks associated with fine parts of the same hand, such as the wrist and fingers. This in turn can enable individuals with transradial amputations to better control prosthetic hands and perform various dexterous hand tasks. In particular, we present a novel three-stage framework for decoding MI tasks of the same hand, comprising input, feature extraction, and classification stages. At the input stage, we employ a quadratic time-frequency distribution (QTFD) to analyze the EEG signals in the joint time-frequency domain. The QTFD transforms the EEG signals into a set of two-dimensional (2D) time-frequency images (TFIs) that describe the distribution of the energy encapsulated within the EEG signals in terms of time, frequency, and electrode position. At the feature extraction stage, we design a new convolutional neural network (CNN) architecture that automatically analyzes and extracts salient features from the TFIs created at the input stage. Finally, the features obtained at the feature extraction stage are passed to the classification stage, which assigns each input TFI to one of the eleven MI tasks considered in the current study. The performance of our proposed framework is evaluated using EEG signals acquired from eighteen able-bodied subjects and four transradial amputee subjects while performing eleven MI tasks within the same hand. The average classification accuracies obtained for the able-bodied and transradial amputee subjects are 73.7% and 72.8%, respectively. Moreover, our proposed framework yields 14.5% and 11.2% improvements over the results obtained for the able-bodied and transradial amputee subjects, respectively, using conventional QTFD-based handcrafted features and a multi-class support vector machine classifier. These results demonstrate the efficacy of the proposed framework in decoding MI tasks associated with the same hand for both able-bodied and transradial amputee subjects.

INDEX TERMS: Convolutional neural networks (CNN), deep learning, electroencephalography (EEG), motor imagery, time-frequency distribution.
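The abstract describes a CNN that consumes TFIs with time, frequency, and electrode-position axes but does not specify the architecture, so the sketch below is a minimal stand-in, not the paper's network: it stacks one TFI per electrode along the channel axis and maps it to eleven MI classes. The layer sizes, the 32-electrode montage, and the 64x64 TFI resolution are assumptions.

```python
import torch
import torch.nn as nn

class TFICNN(nn.Module):
    """Minimal CNN over time-frequency images (a sketch, not the paper's
    exact architecture).

    Input: (batch, n_electrodes, n_freq_bins, n_time_bins), i.e. one TFI
    per EEG electrode stacked along the channel axis.
    """
    def __init__(self, n_electrodes=32, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_electrodes, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5), nn.Linear(64 * 4 * 4, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Hypothetical smoke test: 8 trials, 32 electrodes, 64x64 TFIs, 11 MI classes.
model = TFICNN()
logits = model(torch.randn(8, 32, 64, 64))
assert logits.shape == (8, 11)
```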