The task of discriminating the motor imagery of different movements within the same limb using electroencephalography (EEG) signals is challenging because these imaginary movements have close spatial representations on the motor cortex area. There is, however, a pressing need to succeed in this task, because the ability to classify different same-limb imaginary movements could increase the number of control dimensions of a brain-computer interface (BCI). In this paper, we propose a 3-class BCI system that discriminates EEG signals corresponding to rest, imaginary grasp movements, and imaginary elbow movements. We also investigate the differences between simple motor imagery and goal-oriented motor imagery in terms of their topographical distributions and classification accuracies. To the best of our knowledge, neither problem has been explored in the literature. Based on the EEG data recorded from 12 able-bodied individuals, we have demonstrated that same-limb motor imagery classification is possible. For the binary classification of imaginary grasp and elbow (goal-oriented) movements, the average accuracy achieved is 66.9%. For the 3-class problem of discriminating rest against imaginary grasp and elbow movements, the average classification accuracy achieved is 60.7%, which is greater than the chance-level accuracy of 33.3%. Our results also show that goal-oriented imaginary elbow movements lead to better classification performance than simple imaginary elbow movements. The proposed BCI system could potentially be used to control a robotic rehabilitation system that assists stroke patients in performing task-specific exercises.
A brain-computer interface (BCI) allows collaboration between humans and machines. It translates the electrical activity of the brain into understandable commands to operate a machine or a device. In this study, we propose a method to improve the accuracy of a 3-class BCI using electroencephalographic (EEG) signals. This BCI discriminates rest against imaginary grasp and elbow movements of the same limb. This classification task is challenging because imaginary movements within the same limb have close spatial representations on the motor cortex area. The proposed method extracts time-domain features and classifies them using a support vector machine (SVM) with a radial basis function (RBF) kernel. An average accuracy of 74.2% was obtained when the proposed method was applied to a dataset collected, prior to this study, from 12 healthy individuals. This accuracy was higher than that obtained with other widely used methods, such as common spatial patterns (CSP), filter bank CSP (FBCSP), and band power methods, on the same dataset. These results are encouraging, and the proposed method could potentially be used in future applications including BCI-driven robotic devices, such as a portable exoskeleton for the arm, to assist individuals with impaired upper extremity functions in performing daily tasks.
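The classification stage described above (time-domain features fed to an SVM with an RBF kernel, evaluated by cross-validation) can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the study's actual pipeline: the feature values, trial counts, and SVM hyperparameters here are assumptions for the demo.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical synthetic data standing in for per-trial EEG feature
# vectors: 90 trials x 24 features, 3 classes (rest, grasp, elbow).
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 24))
y = np.repeat([0, 1, 2], 30)          # 0 = rest, 1 = grasp, 2 = elbow
X[y == 1] += 0.8                      # inject class separability for the demo
X[y == 2] -= 0.8

# SVM with an RBF kernel; features are standardised first, since RBF
# kernels are scale-sensitive. Accuracy is estimated by 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Standardising before the RBF kernel matters because the kernel's distance computation would otherwise be dominated by whichever features happen to have the largest numeric range.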
Background: A novel artefact removal algorithm is proposed for a self-paced hybrid brain-computer interface (BCI) system. This hybrid system combines a self-paced BCI with an eye-tracker to operate a virtual keyboard. To select a letter, the user must gaze at the target for at least a specific period of time (the dwell time) and then activate the BCI by performing a mental task. Unfortunately, electroencephalogram (EEG) signals are often contaminated with artefacts. Artefacts degrade the quality of EEG signals and subsequently degrade the BCI's performance.

Methods: To remove artefacts in EEG signals, the proposed algorithm uses the stationary wavelet transform combined with a new adaptive thresholding mechanism. To evaluate the performance of the proposed algorithm and other artefact handling/removal methods, semi-simulated EEG signals (i.e., real EEG signals mixed with simulated artefacts) and real EEG signals obtained from seven participants are used. For real EEG signals, the hybrid BCI system's performance is evaluated in an online-like manner, i.e., using the continuous data from the last session as in a real-time environment.

Results: With semi-simulated EEG signals, we show that the proposed algorithm achieves lower signal distortion in both the time and frequency domains. With real EEG signals, we demonstrate that for a dwell time of 0.0 s, the number of false positives per minute is 2 and the true positive rate (TPR) achieved by the proposed algorithm is 44.7%, which is more than 15.0% higher than that of other state-of-the-art artefact handling methods. As the dwell time increases to 1.0 s, the TPR increases to 73.1%.

Conclusions: The proposed artefact removal algorithm greatly improves the BCI's performance. It also has the following advantages: a) it does not require additional electrooculogram/electromyogram channels, long data segments, or a large number of EEG channels; b) it allows real-time processing; and c) it reduces signal distortion.
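The core idea of the Methods section, denoising an EEG segment by thresholding stationary wavelet transform (SWT) detail coefficients, can be sketched with PyWavelets. Note the hedge: the paper's adaptive thresholding mechanism is not specified in the abstract, so this sketch substitutes the standard universal threshold with a robust (MAD-based) noise estimate; the wavelet, level, and signal parameters are likewise illustrative assumptions.

```python
import numpy as np
import pywt

def swt_denoise(x, wavelet="sym4", level=3):
    """Attenuate large-amplitude artefacts in a 1-D EEG segment by
    soft-thresholding SWT detail coefficients. The universal threshold
    used here is a standard stand-in, not the paper's adaptive scheme.
    Segment length must be divisible by 2**level (an SWT requirement)."""
    coeffs = pywt.swt(x, wavelet, level=level)
    denoised = []
    for cA, cD in coeffs:
        sigma = np.median(np.abs(cD)) / 0.6745      # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(x)))   # universal threshold
        denoised.append((cA, pywt.threshold(cD, thr, mode="soft")))
    return pywt.iswt(denoised, wavelet)

# Semi-simulated segment: an 8 Hz "EEG" rhythm plus a square
# artefact burst, loosely mimicking the paper's evaluation setup.
fs = 256
t = np.arange(fs * 2) / fs                          # 512 samples, 2 s
clean = np.sin(2 * np.pi * 8 * t)
contaminated = clean + np.where((t > 0.9) & (t < 1.1), 4.0, 0.0)
restored = swt_denoise(contaminated)
```

Unlike the decimated DWT, the SWT is shift-invariant (no downsampling), which avoids the reconstruction artefacts that coefficient thresholding can introduce at dyadic boundaries; this is one reason it suits real-time, segment-wise EEG cleaning.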
Traditional, hospital-based stroke rehabilitation can be labor-intensive and expensive. Furthermore, outcomes from rehabilitation are inconsistent across individuals, and recovery is hard to predict. Given these uncertainties, numerous technological approaches have been tested in an effort to improve rehabilitation outcomes and reduce the cost of stroke rehabilitation. These techniques include brain-computer interfaces (BCI), robotic exoskeletons, functional electrical stimulation (FES), and proprioceptive feedback. However, to the best of our knowledge, no studies have combined all these approaches into a rehabilitation platform that facilitates goal-directed motor movements. Therefore, in this paper, we combined all these technologies to test the feasibility of using a BCI-driven exoskeleton with FES (a robotic training device) to facilitate motor task completion among individuals with stroke. The robotic training device operated to assist a pre-defined goal-directed motor task. Because it is hard to predict who can utilize this type of technology, we considered whether the ability to adapt skilled movements with proprioceptive feedback would predict who could learn to control a BCI-driven robotic device. To accomplish this aim, we developed a motor task that requires proprioception for completion to assess motor-proprioception ability. Next, we tested the feasibility of the robotic training system in individuals with chronic stroke (n = 9) and found that the training device was well tolerated by all the participants. Ability on the motor-proprioception task did not predict the time to completion of the BCI-driven task. Both the participants who could accurately target (n = 6) and those who could not (n = 3) were able to learn to control the BCI device, with each BCI trial lasting 2.47 min on average. Our results showed that the participants' ability to use proprioception to control motor output did not affect their ability to use the BCI-driven exoskeleton with FES. Based on these preliminary results, our robotic training device has potential for use as therapy for a broad range of individuals with stroke.
Facilitating independent living of individuals with upper extremity impairment is a compelling goal for our society. The degree of disability of these individuals could potentially be reduced by using robotic devices that assist their movements in activities of daily living. One approach to controlling such robotic systems is a brain-computer interface, which detects the user's intention. This study proposes a method for estimating the user's intention using electroencephalographic (EEG) signals. The proposed method is capable of discriminating rest from various imagined arm movements, including grasping and elbow flexion. The features extracted from EEG signals are autoregressive model coefficients, root-mean-square amplitude, and waveform length. A support vector machine was used as the classifier, distinguishing class labels corresponding to rest and imagined arm movements. The performance of the proposed method was evaluated using cross-validation. Average accuracies of 91.8 ± 5.8% and 90.0 ± 4.1% were obtained for distinguishing rest versus grasping and rest versus elbow flexion, respectively. Compared with the filter bank common spatial pattern, band power, and common spatial pattern methods, which are widely used in the literature, the proposed scheme provides 18.9%, 17.1%, and 16.5% higher classification accuracies, respectively, for rest versus grasping, and 21.9%, 17.6%, and 18.1% higher accuracies for rest versus elbow flexion.
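The three feature types named above (autoregressive coefficients, root-mean-square amplitude, and waveform length) can be computed per channel segment as sketched below. This is a minimal illustration: the AR model order, segment length, and sampling rate are assumptions not stated in the abstract, and the Yule-Walker solution here is one common way to estimate AR coefficients, not necessarily the study's exact estimator.

```python
import numpy as np

def ar_coefficients(x, order=4):
    """AR model coefficients via the Yule-Walker equations, using
    biased autocorrelation estimates. Order 4 is illustrative; the
    study's model order is not stated in the abstract."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def eeg_features(x, order=4):
    """Per-segment feature vector: AR coefficients, RMS amplitude,
    and waveform length (sum of absolute sample-to-sample differences)."""
    rms = np.sqrt(np.mean(x ** 2))
    wl = np.sum(np.abs(np.diff(x)))
    return np.concatenate([ar_coefficients(x, order), [rms, wl]])

# Example on a synthetic 1 s segment at an assumed 128 Hz sampling rate.
rng = np.random.default_rng(1)
segment = (np.sin(2 * np.pi * 10 * np.arange(128) / 128)
           + 0.1 * rng.normal(size=128))
feats = eeg_features(segment)   # 4 AR coefficients + RMS + waveform length
```

RMS captures signal power, waveform length captures combined amplitude and frequency complexity, and the AR coefficients summarise the segment's spectral shape; concatenated per channel, these form the feature vector passed to the SVM classifier.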