This work presents the design, implementation, and evaluation of a P300-based brain-machine interface (BMI) developed to control a robotic hand-orthosis. The purpose of this system is to assist patients with amyotrophic lateral sclerosis (ALS) who cannot open and close their hands by themselves. The user of this interface can select one of six targets, which represent the flexion-extension of one finger independently or the movement of the five fingers simultaneously. We tested our BMI offline and online on eighteen healthy subjects (HS) and eight ALS patients. In the offline test, we used the calibration data of each participant recorded in the experimental sessions to estimate the accuracy of the BMI in correctly classifying single epochs as target or non-target trials. On average, the system accuracy was 78.7% for target epochs and 85.7% for non-target epochs. Additionally, we observed significant P300 responses in the calibration recordings of all the participants, including the ALS patients. For the BMI online test, each subject performed from 6 to 36 target-selection attempts using the interface. In this case, around 46% of the participants achieved 100% accuracy, and the average online accuracy was 89.83%. The maximum information transfer rate (ITR) observed in the experiments was 52.83 bit/min, whereas the average ITR was 18.13 bit/min. The contributions of this work are the following. First, we report the development and evaluation of a mind-controlled robotic hand-orthosis for patients with ALS. To our knowledge, this BMI is one of the first P300-based assistive robotic devices with multiple targets evaluated on people with ALS. Second, we provide a database with calibration data and online EEG recordings obtained during the evaluation of our BMI. These data are useful for developing and comparing other BMI systems and for testing the processing pipelines of similar applications.
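The ITR figures above are presumably computed with Wolpaw's standard formula, which depends only on the number of selectable targets (six here), the selection accuracy, and the time per selection. A minimal sketch (the function name and the timing argument are illustrative assumptions, not details from the paper):

```python
import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Wolpaw ITR in bit/min: bits per selection scaled by selections per minute.

    Valid for 1/n_targets <= accuracy <= 1; the accuracy == 1 case is handled
    separately because both entropy terms vanish in the limit p -> 1.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / selection_time_s)

# With six targets and perfect accuracy, each selection carries log2(6) ≈ 2.58 bits;
# the bit/min figure then depends on how long each selection takes.
```

The reported maximum of 52.83 bit/min would correspond to one specific combination of accuracy and selection time, which the abstract does not state.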
This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the Research and Ethical Committees of the National Institute of Rehabilitation "LGII" under Application No. 08/19, and performed in line with the Declaration of Helsinki.
Currently, one of the challenges in EEG-based brain-computer interfaces (BCI) for neurorehabilitation is the recognition of the intention to perform different movements from the same limb. This would allow finer control of neurorehabilitation and motor recovery devices by end-users. To address this issue, we assess the feasibility of recognizing two rehabilitative right upper-limb movements from premovement EEG signals. These rehabilitative movements were self-selected and self-initiated by the users while operating a motor rehabilitation robotic device. This work proposes anticipatory detection scenarios that discriminate EEG signals corresponding to the non-movement state and the movement intentions of two same-limb movements. The studied movements were discriminated above the empirical chance levels in all proposed detection scenarios. Percentages of correctly anticipated trials ranged from 64.3% to 77.0%, and detections occurred from 620 to 300 ms prior to movement initiation. The results of these studies indicate that it is possible to detect the intention to perform two different movements of the same upper limb, as well as the non-movement state. Based on these results, the decoding of movement intention could potentially be used to develop more natural and intuitive robot-assisted neurorehabilitation therapies.
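The "empirical chance levels" referred to above are typically derived from the binomial distribution of correct guesses: with a finite number of trials, the accuracy threshold for statistical significance sits above the nominal 1/k chance rate. A minimal sketch of that computation (the function name and default alpha are illustrative assumptions, not taken from the paper):

```python
import math

def chance_threshold(n_trials, n_classes, alpha=0.05):
    """Smallest accuracy k/n_trials whose probability under random guessing
    (binomial upper tail) falls below alpha."""
    p = 1.0 / n_classes
    for k in range(n_trials + 1):
        # P(X >= k) for X ~ Binomial(n_trials, p)
        tail = sum(math.comb(n_trials, i) * p**i * (1 - p)**(n_trials - i)
                   for i in range(k, n_trials + 1))
        if tail < alpha:
            return k / n_trials
    return 1.0
```

For example, with 100 trials of a two-class problem the significance threshold is 59% rather than the nominal 50%, which is why results such as the 64.3-77.0% above are compared against empirical rather than nominal chance.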
Motor imagery (MI)-based brain-computer interface (BCI) systems have shown promising advances for lower limb motor rehabilitation. The purpose of this study was to develop an MI-based BCI for the actions of standing and sitting. Thirty-two healthy subjects participated in the study using 17 active EEG electrodes. We used a combination of the filter bank common spatial pattern (FBCSP) method and the regularized linear discriminant analysis (RLDA) technique for decoding EEG rhythms offline and online during motor imagery of standing and sitting. The offline analysis indicated that the classification of motor imagery versus idle state provided mean accuracies of 88.51 ± 1.43% and 85.29 ± 1.83% for the sit-to-stand and stand-to-sit transitions, respectively. The mean accuracies of the sit-to-stand and stand-to-sit online experiments were 94.69 ± 1.29% and 96.56 ± 0.83%, respectively. From these results, we believe that the MI-based BCI may be useful for future brain-controlled standing systems.
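The decoding pipeline named above pairs CSP spatial filtering with regularized LDA. The following is a compact sketch of the generic CSP + shrinkage-LDA idea, assuming NumPy/SciPy and trials shaped (trials, channels, samples); it is not the authors' FBCSP implementation, which additionally applies a filter bank over frequency sub-bands and selects features across bands:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=1):
    """CSP spatial filters from two classes of trials (trials, channels, samples)."""
    C1 = np.mean([np.cov(tr) for tr in X1], axis=0)
    C2 = np.mean([np.cov(tr) for tr in X2], axis=0)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; eigenvalues ascend.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # extremes favor each class
    return vecs[:, picks].T                           # (2*n_pairs, channels)

def log_var_features(W, X):
    """Log of normalized variance of spatially filtered trials."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

def rlda_fit(F, y, reg=0.1):
    """Shrinkage LDA: pooled covariance regularized toward a scaled identity."""
    d = F.shape[1]
    m0, m1 = F[y == 0].mean(0), F[y == 1].mean(0)
    X0, X1 = F[y == 0] - m0, F[y == 1] - m1
    Sw = (X0.T @ X0 + X1.T @ X1) / (len(F) - 2)
    Sw = (1 - reg) * Sw + reg * (np.trace(Sw) / d) * np.eye(d)
    w = np.linalg.solve(Sw, m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b  # predict class 1 when F @ w + b > 0
```

The regularization term shrinks the pooled covariance toward a scaled identity, which stabilizes the LDA solution when the number of features approaches the number of trials, as is common in per-subject EEG calibration data.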
The P300 paradigm is one of the most promising techniques in Brain-Computer Interface (BCI) applications due to its robustness and reliability, but it is not exempt from shortcomings. The present work studied single-trial classification effectiveness in distinguishing between target and non-target responses, considering two visual stimulation conditions and variation in the number of symbols presented to the user in a single-option visual frame. In addition, we investigated the relationship between the classification results for target and non-target events when training and testing the machine-learning model with datasets containing different stimulation conditions and different numbers of symbols. To this end, we designed a P300 experimental protocol considering, as stimulation conditions, color highlighting or the superimposition of a cartoon face, and from four to nine options. These experiments were carried out with 19 healthy subjects in 3 sessions. The results showed that the Event-Related Potential (ERP) responses and the classification accuracy are stronger with cartoon faces as the stimulus type, and similar irrespective of the number of options. In addition, classification performance is reduced when using datasets with different stimulus types, but it is similar when using datasets with different numbers of symbols. These results are particularly relevant for the design of systems intended to elicit stronger evoked potentials while, at the same time, optimizing training time.