As an important paradigm of spontaneous brain-computer interfaces (BCIs), motor imagery (MI) has been widely used in neurological rehabilitation and robot control. Recently, researchers have proposed various methods for feature extraction and classification of MI signals. Decoding models based on deep neural networks (DNNs) have attracted significant attention in MI signal processing. Because of the strict requirements placed on subjects and experimental environments, it is difficult to collect large-scale, high-quality electroencephalogram (EEG) data, yet the performance of a deep learning model depends directly on dataset size. Decoding MI-EEG signals with a DNN has therefore proven highly challenging in practice. Motivated by this, we investigated the performance of different data augmentation (DA) methods for the classification of MI data using a DNN. First, we transformed the time-series signals into spectrogram images using the short-time Fourier transform (STFT). Then, we evaluated and compared the performance of different DA methods on this spectrogram data. Next, we developed a convolutional neural network (CNN) to classify the MI signals and compared the classification performance before and after DA. The Fréchet inception distance (FID) was used to evaluate the quality of the generated data (GD), while classification accuracy and mean kappa values were used to identify the best CNN-DA combination. In addition, analysis of variance (ANOVA) and paired t-tests were used to assess the significance of the results. The results showed that the deep convolutional generative adversarial network (DCGAN) provided better augmentation performance than the traditional DA methods: geometric transformation (GT), autoencoder (AE), and variational autoencoder (VAE) (p < 0.01). Public datasets from BCI competition IV (datasets 1 and 2b) were used to verify the classification performance.
Improvements in the classification accuracies of 17% and 21% (p < 0.01) were observed after DA for the two datasets. In addition, the hybrid network CNN-DCGAN outperformed the other classification methods, with average kappa values of 0.564 and 0.677 for the two datasets.
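The first step of the pipeline above, converting time-series EEG into spectrogram images via the STFT, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the sampling rate, window length, and hop size are assumed example values.

```python
import numpy as np

def stft_spectrogram(x, fs, win_len, hop):
    """Magnitude spectrogram of a 1-D signal via a Hann-windowed STFT."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    # Rows = frequency bins, columns = time frames.
    spec = np.abs(np.fft.rfft(frames, axis=1)).T
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return spec, freqs

# Example: a 10 Hz sinusoid mimicking a mu-band EEG component
# (fs = 250 Hz is an assumed value, not taken from the datasets).
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
spec, freqs = stft_spectrogram(x, fs, win_len=125, hop=62)
peak_hz = freqs[np.argmax(spec.mean(axis=1))]  # strongest frequency bin -> 10.0
```

The resulting 2-D magnitude array can then be treated as an image for the DA methods and the CNN classifier described above.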
Early detection of and intervention in cerebral palsy can promote neural remodeling during brain development, thereby reducing its negative effects. In this paper, we propose a novel method for the early prediction of infant cerebral palsy based on General Movements Assessment (GMA) theory using RGB-D videos. First, we explored human pose recognition in the supine position based on RGB-D videos and then applied it to automatic GMA. Specifically, we employ a current pose estimation method on RGB images to obtain the infant's full-body 2D key points. By combining the depth information, the 3D movement of the infant in the supine position can be obtained. The infant's movement complexity index is then computed by extracting whole-body movement characteristics. To verify the effectiveness of the method, we conducted experiments on a public dataset consisting of 12 RGB-D videos of recorded infant movements, 4 of which were diagnosed as abnormal by a GMA expert. We used expert GMA ratings of these recorded movements as the gold standard. Our method achieved state-of-the-art performance, with a sensitivity of 100%, a specificity of 87.5%, and an accuracy of 91.7%. The results show that the method has great potential to assist doctors in diagnosing infant cerebral palsy.
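The step of lifting 2D key points to 3D by combining depth can be illustrated with a standard pinhole-camera back-projection. This is a generic sketch; the intrinsics `fx`, `fy`, `cx`, `cy` below are hypothetical example values, not those of the dataset's sensor.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project a 2D pixel (u, v) with depth z (metres) to a 3D
    point in the camera frame, using the pinhole camera model."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical intrinsics for a 640x480 RGB-D sensor.
fx = fy = 525.0
cx, cy = 319.5, 239.5

# A key point detected at the principal point, 1 m from the camera,
# lies on the optical axis:
p0 = backproject(319.5, 239.5, 1.0, fx, fy, cx, cy)   # -> [0., 0., 1.]
# A point fx pixels to the right at 2 m depth is 2 m off-axis:
p1 = backproject(319.5 + 525.0, 239.5, 2.0, fx, fy, cx, cy)  # -> [2., 0., 2.]
```

Applying this per frame to each detected joint yields the 3D trajectories from which whole-body movement characteristics can be extracted.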
The steady-state motion visual evoked potential (SSMVEP) collected from the scalp suffers from strong noise and is contaminated by artifacts such as the electrooculogram (EOG) and the electromyogram (EMG). Spatial filtering methods can fuse information from different brain regions, which benefits the enhancement of the active components of the SSMVEP. Traditional spatial filtering methods fuse electroencephalogram (EEG) signals in the time domain. Based on the idea of image fusion, this study proposes an SSMVEP enhancement method based on time-frequency (T-F) image fusion, whose purpose is to fuse the SSMVEP in the T-F domain and improve on the enhancement of SSMVEP active components achieved by traditional spatial filtering. First, two electrode signals were transformed from the time domain to the T-F domain via the short-time Fourier transform (STFT); the transformed T-F signals can be regarded as T-F images. Then, the two T-F images were decomposed via two-dimensional multiscale wavelet decomposition, and both the high-frequency and low-frequency wavelet coefficients were fused by specific fusion rules. The two images were merged into one via two-dimensional wavelet reconstruction. The fused image was subjected to mean filtering, and finally, the fused time-domain signal was obtained by the inverse STFT (ISTFT). The experimental results show that the proposed method enhances the SSMVEP active components better than traditional spatial filtering methods. This study indicates that it is feasible to fuse the SSMVEP in the T-F domain, which provides a new direction for SSMVEP analysis.
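The wavelet-domain fusion step described above can be sketched with a one-level 2-D Haar transform. This is a self-contained NumPy illustration only: the abstract does not name the wavelet, decomposition depth, or fusion rules, so we assume a common convention (average the low-frequency coefficients, take the maximum-absolute-value high-frequency coefficients) and omit the mean filtering and ISTFT steps.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (image dimensions must be even)."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2       # approximation (low-frequency)
    LH = (a[:, 0::2] - a[:, 1::2]) / 2       # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2       # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2       # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse_tf_images(img1, img2):
    """Fuse two T-F magnitude images: mean low-frequency coefficients,
    max-absolute-value high-frequency coefficients (assumed rules)."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
               for h1, h2 in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)

# Sanity check: fusing a T-F image with itself returns it unchanged.
rng = np.random.default_rng(0)
tf = np.abs(rng.standard_normal((8, 8)))
fused = fuse_tf_images(tf, tf)
```

In practice the two inputs would be the STFT magnitude images of two electrode channels, and a multiscale library such as PyWavelets would replace the hand-rolled one-level transform.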