A brain–computer interface (BCI) provides a novel non-muscular communication method via brain signals. The BCI speller was among the first published BCI applications and opened the door for many advances in the field. Although many BCI spellers have been developed over the last few decades, to our knowledge no review has described the different spellers proposed and studied in this vital field. The speller systems presented here are categorized according to the major BCI paradigms: P300, steady-state visual evoked potential (SSVEP), and motor imagery (MI). Each paradigm relies on specific electroencephalogram (EEG) signal features and leads to the development of an appropriate graphical user interface (GUI). The purpose of this review is to consolidate the most successful BCI spellers published since 2010, while also mentioning some older systems built explicitly for spelling. We aim to assist researchers and other interested readers by presenting the highlights of the different spellers in a single review. An objective comparison between spellers is nearly impossible, as each has its own variables, parameters, and conditions. However, the information gathered here and the provided taxonomy of BCI spellers can help first-time users identify suitable systems, and can point BCI researchers to opportunities for development and for learning from previous studies.
Brain–computer interface (BCI) systems use brain activity as an input signal and enable communication without requiring bodily movement. This novel technology may help impaired patients and users with disabilities to communicate with their environment. Over the years, researchers have investigated the performance of subjects in different BCI paradigms, reporting that 15%–30% of BCI users are unable to reach proficiency with a BCI system and were therefore labelled BCI illiterates. Recent progress in BCIs based on visually evoked potentials (VEPs) necessitates reconsidering this term, as very often all subjects are able to use VEP-based BCI systems. This study examines correlations among BCI performance, personal preferences, and further demographic factors for three modern visually evoked BCI paradigms: (1) conventional steady-state visual evoked potentials based on visual stimuli flickering at specific constant frequencies (fVEP), (2) steady-state motion visual evoked potentials (SSmVEP), and (3) code-modulated visual evoked potentials (cVEP). Demographic parameters, as well as handedness, vision correction, BCI experience, etc., had no significant effect on the performance of VEP-based BCIs. Most subjects did not consider the flickering stimuli annoying; only 20 out of a total of 86 participants indicated a change in fatigue during the experiment. 83 subjects successfully finished all spelling tasks with the fVEP speller, with a mean (SD) information transfer rate of 31.87 (9.83) bit/min and an accuracy of 95.28% (5.18). In comparison, 80 subjects successfully finished all spelling tasks using SSmVEP, with a mean information transfer rate of 26.44 (8.04) bit/min and an accuracy of 91.10% (6.01). Finally, all 86 subjects successfully finished all spelling tasks with the cVEP speller, with a mean information transfer rate of 40.23 (7.63) bit/min and an accuracy of 97.83% (3.37).
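The information transfer rates quoted above follow the convention used throughout the BCI literature; the standard definition is the Wolpaw ITR, which combines the number of targets, the selection accuracy, and the selection rate. The abstract does not state which ITR variant the authors used, so the following is a minimal sketch of the standard Wolpaw formula only; the function name `wolpaw_itr` is illustrative.

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bit/min.

    Bits per selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    multiplied by the number of selections per minute.
    """
    p = accuracy
    if p <= 1.0 / n_targets:
        return 0.0  # at or below chance level, no information is transferred
    bits = math.log2(n_targets) + p * math.log2(p)
    if p < 1.0:  # the (1-P) term vanishes at perfect accuracy
        bits += (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * selections_per_min
```

At 100% accuracy, for example, a 32-target speller carries log2(32) = 5 bits per selection, so the ITR is simply five times the selection rate.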
Keyboards and smartphones allow users to express their thoughts freely via manual control. Hands-free communication can be realized with brain–computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEPs). Several variants of such spellers have been developed: low-target systems, multi-target systems, and systems with dictionary support. In general, it is not clear which kinds of systems are optimal in terms of reliability, speed, cognitive load, and visual load. The presented study investigates the feasibility of different speller variants. 58 users tested a 4-target speller and a 32-target speller, the latter with and without dictionary functionality. For classification, multiple individualized spatial filters were generated via canonical correlation analysis (CCA). We used an asynchronous implementation allowing a non-control state, thus aiming for high accuracy rather than speed. All users were able to control the tested spellers. Interestingly, no significant differences in accuracy were found: 94.4%, 95.5%, and 94.0% for 4-target spelling, 32-target spelling, and dictionary-assisted 32-target spelling, respectively. The mean ITR was highest for the 32-target interface: 45.2, 96.9, and 88.9 bit/min, respectively. The output speed, in characters per minute, was highest for dictionary-assisted spelling: 8.2, 19.5, and 31.6 characters/min, respectively. According to questionnaire results, 86% of the participants preferred the 32-target speller over the 4-target speller.
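The CCA-based classification mentioned above works by scoring an EEG segment against a stimulation template for each target and selecting the best match. The study used multiple individualized spatial filters, which is more elaborate than what follows; this is only a minimal NumPy sketch of the core idea (first canonical correlation via QR plus SVD), with illustrative names `max_canonical_corr` and `classify`, not the authors' implementation.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """First canonical correlation between two multichannel signals.

    X: (n_samples, n_channels) EEG segment.
    Y: (n_samples, n_dims) target template (e.g. a c-VEP code template).
    """
    Xc = X - X.mean(axis=0)          # centre both signals
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)         # orthonormal bases of the column spaces
    Qy, _ = np.linalg.qr(Yc)
    # Singular values of Qx^T Qy are the canonical correlations (<= 1).
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

def classify(X, templates):
    """Pick the target whose template correlates best with the EEG segment."""
    return int(np.argmax([max_canonical_corr(X, T) for T in templates]))
```

In a real c-VEP speller the templates are typically learned from training data (e.g. averaged responses to the code sequence) rather than constructed analytically as in this toy usage.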
Responsive EEG-based communication systems have been implemented with brain–computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEPs). The BCI targets are typically encoded with binary m-sequences because of their autocorrelation property; the digits one and zero correspond to different target colours (usually black and white), which are updated every frame according to the code. While binary flickering patterns enable high communication speeds, many users perceive them as annoying. Quintary (base-5) m-sequences, where the five digits correspond to different shades of grey, may yield a more subtle visual stimulation. This study explores two approaches to reducing the flickering sensation: (1) adjusting the flickering speed via the refresh rate and (2) applying quintary codes. In this respect, six flickering modalities are tested using an eight-target spelling application: binary and quintary patterns generated at 60, 120, and 240 Hz refresh rates. The study was conducted with 18 nondisabled participants, each of whom completed a copy-spelling task for all six flickering modalities. According to questionnaire results, most users favoured the proposed quintary pattern over the binary one while achieving similar performance (no statistically significant differences between the patterns were found). Mean accuracies across participants were above 95%, and information transfer rates were above 55 bit/min, for all patterns and flickering speeds.
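Binary m-sequences of the kind described above are conventionally generated with a linear-feedback shift register (LFSR) built from a primitive polynomial, giving a pseudorandom sequence of period 2^k - 1 with the near-ideal cyclic autocorrelation the abstract refers to. The abstract does not specify which polynomials the authors used; the sketch below uses x^5 + x^3 + 1 (period 31) purely as an example, and `m_sequence` is an illustrative name.

```python
def m_sequence(taps, length, seed=1):
    """Binary m-sequence from a Fibonacci LFSR.

    taps: 1-indexed feedback tap positions, e.g. [5, 3] for x^5 + x^3 + 1
          (a primitive polynomial, so the period is 2^5 - 1 = 31).
    seed: any nonzero initial register state.
    """
    n = max(taps)
    state = [(seed >> i) & 1 for i in range(n)]
    out = []
    for _ in range(length):
        out.append(state[-1])              # emit the oldest bit
        fb = 0
        for tp in taps:                    # XOR the tapped register bits
            fb ^= state[tp - 1]
        state = [fb] + state[:-1]          # shift, feeding the XOR back in
    return out
```

In a binary c-VEP speller each emitted bit selects the target colour for one display frame (e.g. 0 = black, 1 = white); the quintary variant replaces the bits with base-5 digits over GF(5), mapped to five shades of grey.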
Brain–computer interfaces (BCIs) measure brain activity and translate it into control of computer programs or external devices. However, the brain activity generated during BCI use makes objective fatigue evaluation very difficult, and the situation is further complicated by movement artefacts. BCI performance could be increased if an online method existed to measure fatigue objectively and accurately. A novel automatic online artefact removal technique is used to filter out movement artefacts while BCI users are moving. This paper investigates the effects of this filter on BCI performance and, in particular, on peak frequency detection during BCI use. A successful peak alpha frequency measurement can lead to a more accurate determination of objective user fatigue. Fifteen subjects performed various imaginary and actual movements in separate tasks while fourteen electroencephalography (EEG) electrodes were recorded. Afterwards, a steady-state visual evoked potential (SSVEP)-based BCI speller was used, and the users were instructed to perform various movements. An offline curve-fitting method was used for alpha peak detection to assess the effect of the artefact filtering. The filter improved peak detection, finding 10.91% and 9.68% more alpha peaks during simple EEG recordings and BCI use, respectively. As expected, BCI performance deteriorated with movement, and further with artefact removal: average information transfer rates (ITRs) were 20.27 bit/min, 16.96 bit/min, and 14.14 bit/min for the (1) movement-free, (2) moving and unfiltered, and (3) moving and filtered scenarios, respectively.
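The abstract does not detail the offline curve-fitting method used for alpha peak detection. One common, simple approach is to locate the power-spectral-density maximum in the alpha band (roughly 8–13 Hz) and refine it by fitting a parabola through the log-power values around it; the sketch below shows that approach only as an assumed stand-in, with `alpha_peak` as an illustrative name.

```python
import numpy as np

def alpha_peak(freqs, psd, band=(8.0, 13.0)):
    """Estimate the individual alpha peak frequency from a PSD.

    freqs, psd: 1-D arrays on a uniform frequency grid.
    Returns None when the band maximum sits at a band edge
    (i.e. no clear peak was found inside the alpha band).
    """
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f, p = freqs[mask], psd[mask]
    i = int(np.argmax(p))
    if i == 0 or i == len(p) - 1:
        return None
    # Parabolic interpolation through the three log-power points at the maximum.
    y0, y1, y2 = np.log(p[i - 1 : i + 2])
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(f[i])
    delta = 0.5 * (y0 - y2) / denom      # sub-bin offset of the vertex
    return float(f[i] + delta * (f[1] - f[0]))
```

Returning None for edge maxima matters here: the study counts how many alpha peaks are found with and without artefact filtering, so the detector must be able to report "no peak" rather than always returning a frequency.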