System calibration and user training are essential for operating motor imagery-based brain-computer interface (BCI) systems. These steps are often unintuitive and tedious for the user, and do not necessarily lead to a satisfactory level of control. We present an adaptive BCI framework that provides feedback after only minutes of autocalibration in a two-class BCI setup. During operation, the system recurrently reselects only one out of six predefined logarithmic bandpower features (10-13 Hz and 16-24 Hz from Laplacian derivations over C3, Cz, and C4), specifically the feature that exhibits maximum discriminability. The system then retrains a linear discriminant analysis (LDA) classifier on all available data and updates the online paradigm with the new model. Every retraining step is preceded by an online outlier rejection. Operating the system requires no engineering knowledge other than connecting the user and starting the system. In a supporting study, ten out of twelve novice users reached a criterion level of above 70% accuracy in one to three sessions (10-80 min online time) of training, with a median accuracy of 80.2 ± 11.3% in the last session. We consider the presented system a positive first step towards fully autocalibrating motor imagery BCIs.
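The autocalibration pipeline described above (six log-bandpower features, reselection of the single most discriminative one, then retraining a classifier) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the band-power estimate uses a plain FFT rather than a proper filter bank, the discriminability criterion is assumed to be a Fisher-type score, and the two-class LDA on a single feature is reduced to its equivalent threshold rule. Function names and the synthetic data layout are hypothetical.

```python
import numpy as np

def log_bandpower(x, fs, band):
    """Log-power of a 1-D signal in a frequency band (simplified FFT estimate)."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[mask].mean() + 1e-12)

def extract_features(trials, fs):
    """Six features per trial: 2 bands x 3 channels (C3, Cz, C4 assumed order)."""
    bands = [(10, 13), (16, 24)]
    return np.asarray([[log_bandpower(trial[ch], fs, b)
                        for b in bands for ch in range(3)]
                       for trial in trials])  # trial: (n_channels, n_samples)

def select_feature(F, y):
    """Pick the single most discriminative feature via a Fisher score
    (assumed criterion; the paper only states 'maximum discriminability')."""
    f0, f1 = F[y == 0], F[y == 1]
    score = (f0.mean(0) - f1.mean(0)) ** 2 / (f0.var(0) + f1.var(0) + 1e-12)
    return int(np.argmax(score))

def fit_lda_1d(f, y):
    """Two-class LDA on one feature reduces to a threshold at the midpoint
    of the class means (equal-variance assumption)."""
    m0, m1 = f[y == 0].mean(), f[y == 1].mean()
    return 0.5 * (m0 + m1), (1.0 if m1 > m0 else -1.0)

def predict(f, thresh, sign):
    return (sign * (f - thresh) > 0).astype(int)
```

In the online loop, `extract_features`, `select_feature`, and `fit_lda_1d` would be rerun on all accumulated (outlier-cleaned) trials after each retraining trigger.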
Objective. Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked by flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs). However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in the visual stimuli and their relative spatial arrangement. When this knowledge is unavailable, such as when SSVEP signals are acquired asynchronously, such approaches tend to fail. Approach. In this paper, we show how a compact convolutional neural network (Compact-CNN), which requires only raw EEG signals for automatic feature extraction, can be used to decode signals from a 12-class SSVEP dataset without the need for user-specific calibration. Main results. The Compact-CNN demonstrates an across-subject mean accuracy of approximately 80%, outperforming current state-of-the-art hand-crafted approaches based on canonical correlation analysis (CCA) and Combined-CCA. Furthermore, the Compact-CNN approach can reveal the underlying feature representation, showing that the deep learner extracts additional phase- and amplitude-related features associated with the structure of the dataset. Significance. We discuss how our Compact-CNN shows promise for BCI applications that allow users to freely gaze at or attend to any stimulus at any time (e.g., asynchronous BCI) and provides a method for analyzing SSVEP signals in a way that might augment our understanding of basic processing in the visual cortex.

Evoked potentials are robust signals in the electroencephalogram (EEG) induced by sensory stimuli, and they have been used to study normal and abnormal function of the sensory cortex [1].
The most well-studied of these are Steady-State Visual Evoked Potentials (SSVEPs), which are neural oscillations in the visual cortex that are evoked by stimuli that temporally flicker in a narrow frequency band [2,3]. SSVEPs likely arise from a reorganization of spontaneous intrinsic brain oscillations in response to a stimulus [4]. Paradigms leveraging SSVEP responses have been used to investigate the organization of the visual system [5,6], identify biomarkers of disease and sensory function [7][8][9], and probe visual perception [10,11]. The robustness of the SSVEP has enabled its use as a control signal for brain-computer interfaces (BCIs) that enable low-bandwidth communication for individuals with catastrophic loss of motor function, bypassing neuromuscular pathways and establishing a communication link directly to the brain [12,13]. In a typical SSVEP BCI, a patient/subject is presented with a grid of squares on a computer monitor, where each square contains semantic information such as a letter, number, character, or action. Superimposed on these squares are visual flicker frequencies that uniquely "tag" each square, thus mapping each square to a distinct frequency that can be decoded from the user's EEG.
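The CCA baseline mentioned above works by correlating the multichannel EEG against banks of sinusoidal reference signals, one bank per candidate flicker frequency, and picking the frequency with the largest canonical correlation. A minimal numpy-only sketch of that standard procedure (not the paper's code; function names, harmonic count, and the QR/SVD formulation of CCA are my assumptions) is:

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y
    (both samples x features), via the QR/SVD formulation of CCA."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_classify(eeg, fs, freqs, n_harmonics=2):
    """eeg: (n_samples, n_channels). Returns the index of the candidate
    stimulus frequency whose sin/cos reference set best matches the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # Reference bank: sin/cos pairs at the fundamental and its harmonics.
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(max_canon_corr(eeg, ref))
    return int(np.argmax(scores))
```

Note that this is exactly the kind of hand-crafted, stimulus-knowledge-dependent decoder the paper contrasts with: it requires the candidate frequency list `freqs` up front, which is what the Compact-CNN avoids.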
Our state of arousal can significantly affect our ability to make optimal decisions, judgments, and actions in real-world dynamic environments. The Yerkes–Dodson law, which posits an inverted-U relationship between arousal and task performance, suggests that there is a state of arousal that is optimal for behavioral performance in a given task. Here we show that we can use online neurofeedback to shift an individual's arousal from the right side of the Yerkes–Dodson curve to the left, toward a state of improved performance. Specifically, we use a brain–computer interface (BCI) that uses information in the EEG to generate a neurofeedback signal that dynamically adjusts an individual's arousal state while they are engaged in a boundary-avoidance task (BAT). The BAT is a demanding sensory-motor task paradigm that we implement as an aerial navigation task in virtual reality and which creates cognitive conditions that escalate arousal and quickly result in task failure (e.g., missing or crashing into the boundary). We demonstrate that task performance, measured as the time and distance over which the subject can navigate before failure, is significantly increased when veridical neurofeedback is provided. Simultaneous measurements of pupil dilation and heart-rate variability show that the neurofeedback indeed reduces arousal. Our work demonstrates a BCI system that uses online neurofeedback to shift arousal state and increase task performance in accordance with the Yerkes–Dodson law.
Highlights.
1. We conduct a feasibility study with 14 individuals with cerebral palsy (CP) to evaluate their control of two online brain-computer interfaces (BCIs).
2. Eight of the individuals with CP were able to control at least one of the BCIs at a statistically significant level of accuracy.
3. Analysis of the results reveals that BCIs may be controlled by some individuals with CP.

Abstract. Objective. Brain-computer interfaces (BCIs) have been proposed as potential assistive devices for individuals with cerebral palsy (CP) to assist with their communication needs. However, it is unclear how well suited BCIs are to individuals with CP. Therefore, this study aims to investigate to what extent these users are able to gain control of BCIs. Methods. This study is conducted with 14 individuals with CP attempting to control two standard online BCIs, (1) based upon sensorimotor rhythm modulations and (2) based upon steady-state visual evoked potentials. Results. Of the 14 users, 8 are able to use one or other of the BCIs, online, with a statistically significant level of accuracy, without prior training. Classification results are driven by neurophysiological activity and are not seen to correlate with occurrences of artifacts. However, many of these users' accuracies, while statistically significant, would require either more training or more advanced methods before practical BCI control would be possible. Conclusions. The results indicate that BCIs may be controlled by individuals with CP but that many issues need to be overcome before practical application may be achieved. Significance. This is the first study to assess the ability of a large group of different individuals with CP to gain control of an online BCI system. (Preprint submitted to Clinical Neurophysiology, March 27, 2013; contact: Reinhold Scherer, reinhold.scherer@tugraz.at.)
The results indicate that six users could control a sensorimotor rhythm (SMR) BCI and three a steady-state visual evoked potential (SSVEP) BCI at statistically significant levels of accuracy (SMR accuracy, mean ± SD: 0.821 ± 0.116; SSVEP accuracy: 0.422 ± 0.069).
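The "statistically significant level of accuracy" criterion above is typically derived from the binomial distribution: under chance performance, the number of correct trials out of N is binomial with p = 1/(number of classes), and one looks for the smallest accuracy whose upper tail probability falls below a significance level. The study's exact test is not restated here; the following is a generic sketch of that common procedure, with an assumed alpha of 0.05.

```python
from math import comb

def significance_threshold(n_trials, n_classes=2, alpha=0.05):
    """Smallest number of correct trials k such that P(X >= k) <= alpha
    when X ~ Binomial(n_trials, 1/n_classes), i.e. under chance performance."""
    p = 1.0 / n_classes
    tail = 0.0
    # Accumulate the upper tail P(X >= k), scanning k from n_trials down.
    for k in range(n_trials, -1, -1):
        tail += comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
        if tail > alpha:
            return k + 1
    return 0
```

For example, with 100 two-class trials, accuracies at or above 0.59 are better than chance at alpha = 0.05, which is why accuracies only slightly above 0.5 (or above 1/12 for a 12-class SSVEP speller) can still be statistically meaningful.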