Feature extraction and classification play an important role in brain-computer interface (BCI) systems. Traditional approaches adopt methods from the pattern recognition field to solve these problems. Deep learning has since advanced rapidly and has been applied in areas such as computer vision and speech recognition with remarkable results. However, few researchers have applied deep learning to biomedical signals, especially EEG signals. In this paper, a wavelet transform-based input, which combines the time-frequency images of the C3, Cz, and C4 channels, is proposed to extract features from motor imagery EEG signals. A two-layer convolutional neural network is then built as the classifier, and convolutional kernels of different sizes are validated. The performance of the proposed approach is evaluated by accuracy and Kappa value. The accuracy on dataset III from BCI competition II reaches 90%, and the best Kappa value on dataset 2a from competition IV is higher than that of many other methods. In addition, the proposed method uses a resized small input, which reduces computational complexity, so the training period is relatively short. The results show that the convolutional neural network method is comparable to or better than other state-of-the-art approaches, and that its performance improves when sufficient data are available.

INDEX TERMS: Brain-computer interface (BCI), motor imagery (MI), wavelet transform time-frequency image, convolutional neural network (CNN).
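As a rough illustration of how such a wavelet time-frequency image can be built, the sketch below computes a complex-Morlet scalogram of a synthetic signal using only NumPy. The sampling rate, frequency band, wavelet width `w`, and the single-channel toy signal are all assumptions for illustration; the paper's exact transform, channel layout, and image resizing may differ.

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w=6.0):
    """Minimal continuous wavelet transform with a complex Morlet wavelet.
    Returns a (len(freqs), len(signal)) time-frequency magnitude image."""
    n = len(signal)
    tfr = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        s = w * fs / (2 * np.pi * f)                  # Gaussian width in samples
        t = np.arange(-4 * s, 4 * s + 1)              # support: +/- 4 widths
        wavelet = np.exp(2j * np.pi * f * t / fs) * np.exp(-t**2 / (2 * s**2))
        wavelet /= np.sqrt(np.abs(wavelet**2).sum())  # unit-energy normalization
        tfr[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return tfr

# hypothetical single-channel example: a 10 Hz "mu rhythm" burst in the 2nd second
fs = 250
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) * (t > 1.0)
freqs = np.arange(6, 31)          # 6-30 Hz band typically relevant to motor imagery
img = morlet_cwt(sig, fs, freqs)  # one time-frequency image
```

Stacking such images for the C3, Cz, and C4 channels (and resizing them) would give a small multi-channel input for the CNN classifier.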
Stroke is a leading cause of disability worldwide. In this paper, a novel robot-assisted rehabilitation system based on motor imagery electroencephalography (EEG) is developed for regular neurological rehabilitation training of upper limb stroke patients. First, a three-dimensional animation guided the patient in imagining the upper limb movement while EEG signals were acquired by an EEG amplifier. Second, feature vectors were extracted by harmonic wavelet transform (HWT), and a linear discriminant analysis (LDA) classifier was used to classify the left and right upper limb motor imagery EEG signals. Finally, a PC triggered the upper limb rehabilitation robot to perform motor therapy and provided virtual feedback. With this robot-assisted system, the patient's EEG during imagined upper limb movement is translated directly into control of the rehabilitation robot. Consequently, the proposed system can fully engage the patient's motivation and attention and directly facilitate upper limb post-stroke rehabilitation therapy. Experimental results on unimpaired participants demonstrate the feasibility of the system. Combining robot-assisted training with a motor imagery-based BCI should make future rehabilitation therapy more effective, although clinical testing is still required to verify this assumption.
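A two-class LDA classifier like the one used here can be sketched in a few lines of NumPy. The four-dimensional synthetic Gaussian data below is an assumption for illustration; in the actual system, HWT feature vectors from left- and right-imagery trials would take its place.

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: returns weights w and bias b so that
    sign(X @ w + b) separates class 0 (left) from class 1 (right)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    b = -w @ (m0 + m1) / 2                                     # threshold at midpoint
    return w, b

# synthetic stand-in for two well-separated feature distributions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)
w, b = fit_lda(X, y)
acc = ((X @ w + b > 0).astype(int) == y).mean()
```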
The motor imagery (MI) paradigm has been widely used in brain-computer interfaces (BCIs), but the difficulty of performing imagery tasks limits its application. Mechanical vibration stimuli have increasingly been used to enhance MI performance, but the consistency of this improvement is still under debate. To develop a vibration stimulus method that enhances MI more consistently, this study proposes an EEG phase-dependent closed-loop mechanical vibration stimulation method. The index finger of the subject's non-dominant hand received four different vibration stimulation conditions (continuous open-loop vibration, two different phase-dependent closed-loop vibrations, and no stimulus) while the subject performed two tasks: imagining movement of the dominant-hand index finger, and rest. We compared MI performance and brain oscillatory patterns under the different conditions to verify the effectiveness of this method. The subjects performed 80 trials of each type in random order, and the average phase-locking value of the closed-loop stimulus conditions was 0.71. Closed-loop vibration applied in the falling phase helped the subjects produce stronger event-related desynchronization (ERD) and sustain it longer. Moreover, classification accuracy improved by about 9% compared with MI without vibration stimulation (p = 0.012, paired t-test). This method helps modulate the mu rhythm and keeps subjects concentrated on the imagery without negative enhancement during rest tasks, ultimately improving MI-based BCI performance. Participants reported significantly less tactile fatigue under the closed-loop conditions than under continuous stimulation. This novel method improves on traditional vibration stimulation enhancement and makes the stimulation more precise and efficient.
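One plausible way to realize phase-dependent triggering (not necessarily the authors' implementation) is to estimate the instantaneous phase of the mu-band signal with a Hilbert transform and gate the stimulus to the falling phase. The idealized 10 Hz signal and the (0, pi) phase window are assumptions for illustration; a real-time system would additionally need a causal phase predictor rather than the offline Hilbert transform used here.

```python
import numpy as np
from scipy.signal import hilbert

def falling_phase_mask(mu_band_signal):
    """Mark samples in the falling phase of the oscillation.
    For a sinusoid, the analytic-signal phase lies in (0, pi)
    exactly where the signal amplitude is decreasing."""
    phase = np.angle(hilbert(mu_band_signal))  # instantaneous phase
    return (phase > 0) & (phase < np.pi)

fs = 250
t = np.arange(0, 1, 1 / fs)
mu = np.sin(2 * np.pi * 10 * t)   # idealized mu rhythm, 10 full cycles
mask = falling_phase_mask(mu)     # True where the stimulus would fire
```

For a pure sinusoid the mask covers about half of the samples, one contiguous window per cycle.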
Recent developments in non-muscular human-robot interfaces (HRIs) and shared control strategies have shown potential for controlling an assistive robotic arm by people with no residual movement or muscular activity in the upper limbs. However, most non-muscular HRIs produce only discrete-valued commands, resulting in non-intuitive and less effective control of a dexterous assistive robotic arm. Furthermore, in the shared control strategies of such applications, control usually switches between user commands and robot autonomy commands, a characteristic that previous user studies have found to reduce the user's sense of agency and cause frustration. In this study, we first propose an intuitive and easy-to-learn hybrid HRI combining a brain-machine interface (BMI) with a gaze-tracking interface. In the proposed hybrid gaze-BMI, continuous modulation of movement speed via motor intention occurs seamlessly and simultaneously with unconstrained movement direction control via gaze. We then propose a shared control paradigm that always blends user input with robot autonomy, with the blending dynamically regulated. The hybrid gaze-BMI and shared control paradigm were validated in a robotic arm reaching task performed by healthy subjects. All users were able to employ the hybrid gaze-BMI to move the end effector sequentially to targets across the horizontal plane while avoiding collisions with obstacles. The shared control paradigm maintained as much volitional control as possible while providing assistance for the most difficult parts of the task. The presented semi-autonomous robotic system yielded continuous, smooth, and collision-free motion trajectories as the end effector approached the target. Compared with a system without assistance from robot autonomy, it significantly reduced the failure rate as well as the time and effort the user spent completing the tasks.
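A minimal sketch of such continuous command blending, under the assumption of a linear blending law regulated by obstacle distance (the paper's exact dynamic regulation may differ):

```python
import numpy as np

def blend_commands(user_vel, auto_vel, dist_to_obstacle, d_safe=0.2):
    """Dynamic shared control: linearly blend the user's velocity command
    with the autonomy's, weighting autonomy more as obstacles get closer.
    Hypothetical blending law; d_safe is an assumed safety distance (m)."""
    alpha = np.clip(dist_to_obstacle / d_safe, 0.0, 1.0)  # 1.0 = full user control
    return alpha * np.asarray(user_vel) + (1 - alpha) * np.asarray(auto_vel)

# far from obstacles: the command follows the user entirely
far = blend_commands([1.0, 0.0], [0.0, 1.0], dist_to_obstacle=1.0)
# near an obstacle: autonomy dominates to steer around it
near = blend_commands([1.0, 0.0], [0.0, 1.0], dist_to_obstacle=0.02)
```

Because the two commands are always combined rather than switched, the output velocity changes continuously, which is the property the abstract credits with preserving the user's sense of agency.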
Estimating the grip force and the 3D push-pull force (push and pull forces in three-dimensional space) from the electromyogram (EMG) signal is of great importance for dexterous control of an EMG prosthetic hand. In this paper, an action force estimation method based on eight channels of surface EMG (sEMG) and a generalized regression neural network (GRNN) is proposed to meet the force control requirements of an intelligent EMG prosthetic hand. First, the experimental platform, the acquisition of the sEMG, the feature extraction of the sEMG, and the construction of the GRNN are described. Then, multiple channels of sEMG during hand movement are captured by EMG sensors attached at eight positions on the skin surface of the arm. Meanwhile, a grip force sensor and a three-dimensional force sensor measure the output force of the human hand. The sEMG feature matrix and the force signals are used to construct the GRNN. The mean absolute value and root mean square of the estimation errors, together with the correlation coefficients between the actual and estimated forces, are employed to assess estimation accuracy. Analysis of variance (ANOVA) is also employed to test for differences in the force estimates. Experiments verify the effectiveness of the proposed method, and the results show that the output force of the human hand can be correctly estimated using sEMG and the GRNN.
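The GRNN is essentially a Gaussian-kernel weighted average of training targets (a Nadaraya-Watson estimator), so it can be sketched directly in NumPy. The MAV/RMS features, the synthetic sEMG windows, the toy force target, and the kernel width `sigma` below are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN regression: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))          # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)        # summation / division layers

def emg_features(window):
    """Per-channel MAV and RMS of one sEMG window, shape (samples, channels)."""
    mav = np.abs(window).mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    return np.concatenate([mav, rms])

rng = np.random.default_rng(1)
# synthetic stand-in: "force" proportional to the amplitude of 8-channel noise
amps = rng.uniform(0.2, 1.0, size=60)
X = np.vstack([emg_features(a * rng.standard_normal((100, 8))) for a in amps])
y = amps                                         # toy grip-force target
pred = grnn_predict(X[:50], y[:50], X[50:], sigma=0.5)
err = np.abs(pred - y[50:]).mean()               # mean absolute estimation error
```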