Current BCIs have been designed as access methods for AAC rather than as a replacement for it; SLPs can therefore use their existing AAC knowledge as a starting point for clinical application. Additional training is recommended to keep pace with rapid advances in BCI.
We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography to continuously control a formant frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual (AV) feedback. Audio feedback was provided by a formant frequency artificial speech synthesizer, and visual feedback was given as a 2-D cursor on a graphical representation of the plane defined by the first two formant frequencies. We found that combined AV feedback led to the greatest performance in terms of percent accuracy, distance to target, and movement time to target compared with either unimodal feedback of auditory or visual information. These results indicate that performance is enhanced when multimodal feedback is meaningful for the BCI task goals, rather than serving as a generic biofeedback signal of BCI progress.
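The visual feedback described above places a cursor on the plane defined by the first two formant frequencies (F1, F2). A minimal sketch of that idea, assuming a simple linear mapping from two normalized decoded control signals to the formant plane (the frequency ranges and mapping here are illustrative assumptions, not the study's actual synthesizer parameters):

```python
# Hypothetical mapping from two normalized BCI control signals (x, y in [0, 1])
# to the first two formant frequencies, giving a 2-D cursor position on the
# F1/F2 plane. The ranges below are illustrative, not the study's values.
F1_RANGE = (250.0, 850.0)   # Hz; low F1 ~ /i/, high F1 ~ /A/
F2_RANGE = (600.0, 2300.0)  # Hz; low F2 ~ /u/, high F2 ~ /i/

def controls_to_formants(x, y):
    """Linearly map normalized control signals (x, y) to (F1, F2) in Hz."""
    f1 = F1_RANGE[0] + x * (F1_RANGE[1] - F1_RANGE[0])
    f2 = F2_RANGE[0] + y * (F2_RANGE[1] - F2_RANGE[0])
    return f1, f2

def distance_to_target(cursor, target):
    """Euclidean distance in the (F1, F2) plane, one of the reported metrics."""
    return ((cursor[0] - target[0]) ** 2 + (cursor[1] - target[1]) ** 2) ** 0.5
```

A distance-to-target metric like this one supports the study's performance comparison across feedback conditions, alongside percent accuracy and movement time.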
Brain-computer interfaces (BCIs) as assistive devices are designed to provide access to communication, navigation, locomotion, and environmental interaction for individuals with severe motor impairment. In the present paper, we discuss two approaches to communication using a non-invasive BCI via recording of neurological activity related to motor imagery. The first approach uses modulations of the sensorimotor rhythm related to limb movement imagery to continuously modify the output of an artificial speech synthesizer. The second approach detects event-related changes in neurological activity during single-trial motor imagery attempts to control a commercial augmentative and alternative communication device. These two approaches represent two extremes of BCI-based communication, ranging from simple, "button-click" operation of a speech-generating communication device to continuous modulation of an acoustic output speech synthesizer. The goal of developing along this continuum is to facilitate adoption and use of communication BCIs by a heterogeneous target user population.
Functional Near-Infrared Spectroscopy (fNIRS) is an innovative and promising neuroimaging modality for studying brain activity in real-world environments. While fNIRS has seen rapid advancements in hardware, software, and research applications since its emergence nearly 30 years ago, limitations remain in all three areas, where existing practices contribute to greater bias within the neuroscience research community. We spotlight fNIRS through the lens of different end-application users, including the unique perspective of an fNIRS manufacturer, and report the challenges of using this technology across several research disciplines and populations. Through the review of different research domains where fNIRS is utilized, we identify and address the presence of bias, specifically due to the constraints of current fNIRS technology, limited diversity among sample populations, and the societal prejudice that infiltrates today's research. Finally, we provide resources for minimizing bias in neuroscience research and an application agenda for the future use of fNIRS that is equitable, diverse, and inclusive.
Purpose Brain–computer interface (BCI) techniques may provide computer access for individuals with severe physical impairments. However, the relatively hidden nature of BCI control obscures how BCI systems work behind the scenes, making it difficult to understand “how” electroencephalography (EEG) records the BCI-related brain signals, “what” brain signals are recorded by EEG, and “why” these signals are targeted for BCI control. Furthermore, in the field of speech-language-hearing, signals targeted for BCI application have been of primary interest to clinicians and researchers in the area of augmentative and alternative communication (AAC). However, signals utilized for BCI control reflect sensory, cognitive, and motor processes, which are of interest to a range of related disciplines, including speech science. Method This tutorial was developed by a multidisciplinary team emphasizing primary and secondary BCI-AAC–related signals of interest to speech-language-hearing. Results An overview of BCI-AAC–related signals is provided, discussing (a) “how” BCI signals are recorded via EEG; (b) “what” signals are targeted for noninvasive BCI control, including the P300, sensorimotor rhythms, steady-state evoked potentials, contingent negative variation, and the N400; and (c) “why” these signals are targeted. During tutorial creation, attention was given to supporting EEG and BCI understanding for those without an engineering background. Conclusion Tutorials highlighting how BCI-AAC signals are elicited and recorded can help increase interest in and familiarity with EEG and BCI techniques and provide a framework for understanding key principles behind BCI-AAC design and implementation.
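Among the signals the tutorial names, sensorimotor rhythms are typically quantified as spectral power in the mu band (roughly 8–13 Hz) over sensorimotor cortex. A minimal sketch of that feature, assuming a simple single-epoch periodogram (no windowing or trial averaging, and a synthetic signal standing in for real EEG):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate mean spectral power of `signal` in the [low, high] Hz band
    from a simple FFT periodogram (no windowing or segment averaging)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Synthetic 1-second "EEG" epoch: a 10 Hz mu-rhythm component plus noise.
fs = 250                      # sampling rate in Hz (illustrative)
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

mu_power = band_power(epoch, fs, 8, 13)     # sensorimotor (mu) band
beta_power = band_power(epoch, fs, 18, 26)  # beta band, for comparison
```

In SMR-based BCI control, event-related decreases in this band power during movement imagery are what the system maps to an output command; real pipelines would add spatial filtering and artifact handling on top of a band-power feature like this.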