The findings highlight the potential benefits of standing or active deskwork for the allocation of attentional resources and the regulation of stress.
Free communication is one of the cornerstones of modern civilisation. While manual keyboards currently allow us to interface with computers and manifest our thoughts, a next frontier is communication without manual input. Brain-computer interface (BCI) spellers often achieve this by decoding patterns of neural activity as users attend to flickering keyboard displays. To date, the highest performing spellers report typing rates of ~10 words per minute. While impressive, these rates are typically calculated for experienced users repetitively typing single phrases. It is therefore not clear whether naïve users are able to achieve such high rates with the added cognitive load of genuine free communication, which involves continuously generating and spelling novel words and phrases. In two experiments, we developed an open-source, high-performance, non-invasive BCI speller and examined its feasibility for free communication. The BCI speller required users to focus their visual attention on a flickering keyboard display, thereby producing unique cortical activity patterns for each key, which were decoded using filter-bank canonical correlation analysis. In Experiment 1, we tested whether seventeen naïve users could maintain rapid typing during prompted free word association. We found that information transfer rates were indeed slower during this free communication task than during typing of a cued character sequence. In Experiment 2, we further evaluated the speller’s efficacy for free communication by developing a messaging interface, allowing users to engage in free conversation. The results showed that free communication was possible, but that information transfer was reduced by voluntary textual corrections and turn-taking during conversation. We evaluated a number of factors affecting the suitability of BCI spellers for free communication, and made specific recommendations for improving classification accuracy and usability.
Overall, we found that developing a BCI speller for free communication requires a focus on usability over reduced character selection time, and as such, future performance appraisals should be based on genuine free communication scenarios.
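The filter-bank canonical correlation analysis (FBCCA) decoding step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the sub-band edges, the number of harmonics, and the sub-band weights (b^-1.25 + 0.25, a common choice in the FBCCA literature) are all assumptions made here for concreteness.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def canonical_corr(X, Y):
    # Largest canonical correlation: top singular value of Qx^T Qy,
    # where Qx, Qy are orthonormal bases of the centred data matrices.
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_signals(freq, n_samples, fs, n_harmonics=3):
    # Sine/cosine templates at the flicker frequency and its harmonics.
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def fbcca_classify(eeg, freqs, fs, n_bands=4):
    # eeg: (n_samples, n_channels) single-trial segment.
    # Returns the index of the candidate flicker frequency with the
    # highest weighted sum of squared canonical correlations.
    n_samples = eeg.shape[0]
    refs = [reference_signals(f, n_samples, fs) for f in freqs]
    scores = np.zeros(len(freqs))
    for b in range(n_bands):
        lo = 6.0 + 8.0 * b                    # assumed sub-band lower edges
        num, den = butter(4, [lo, 45.0], btype="bandpass", fs=fs)
        sub = filtfilt(num, den, eeg, axis=0)
        w = (b + 1) ** -1.25 + 0.25           # assumed sub-band weights
        for k, ref in enumerate(refs):
            scores[k] += w * canonical_corr(sub, ref) ** 2
    return int(np.argmax(scores))
```

In a speller, each keyboard key flickers at its own frequency, so the returned index maps directly to a selected character.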
The ability to coordinate approach and avoidance actions in dynamic environments represents the boundary between extinction and the continued survival of many animal species. It is therefore crucial that sensory systems allocate limited attentional resources to the most relevant information to facilitate planning and execution of appropriate actions. Prominent theories of how attention regulates visual processing focus on the distinction between behaviorally relevant and irrelevant visual inputs. To date, however, no study has directly compared the deployment of attention to visual inputs relevant for approach and avoidance behaviors, which naturally occur in dynamic, interactive environments. In two experiments, we combined electroencephalography, frequency tagging, and eye gaze measures to investigate whether the deployment of visual selective attention differs for items relevant for approach and avoidance actions. Participants maneuvered a cursor to approach and avoid contact with moving items in a continuous interactive task. The results indicated that while the approach and avoidance tasks recruited equivalent attentional resources overall, attentional biases were directed toward task-relevant items during approach, and away from task-relevant items during avoidance. We conclude that the deployment of visual attention is guided not only by relevance to a behavioral goal, but also by the nature of that goal.
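Attentional biases toward or away from task-relevant items are often expressed as a normalised difference of the frequency-tagged SSVEP amplitudes evoked by each item. The index below is a standard normalisation, assumed here for illustration; the study's exact metric may differ.

```python
import numpy as np

def attentional_bias(amp_relevant, amp_irrelevant):
    # Normalised SSVEP amplitude difference per trial:
    #   > 0 -> attention biased toward task-relevant items
    #   < 0 -> attention biased away from task-relevant items
    a = np.asarray(amp_relevant, dtype=float)
    b = np.asarray(amp_irrelevant, dtype=float)
    return (a - b) / (a + b)
```

For example, `attentional_bias([1.2, 1.0], [0.8, 1.0])` yields roughly `[0.2, 0.0]`: a bias toward the relevant item on the first trial and no bias on the second.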
It is often necessary for individuals to coordinate their actions with others. In the real world, joint actions rely on the direct observation of co-actors and rhythmic cues. But how are joint actions coordinated when such cues are unavailable? To address this question, we recorded brain activity while pairs of participants guided a cursor to a target either individually (solo control) or together with a partner (joint control) from whom they were physically and visibly separated. Behavioural patterns revealed that joint action involved real-time coordination between co-actors and improved accuracy for the lower performing co-actor. Concurrent neural recordings and eye tracking revealed that joint control affected cognitive processing across multiple stages. Joint control involved increases in both behavioural and neural coupling – both quantified as interpersonal correlations – peaking at action completion. Correspondingly, a neural offset response acted as a mechanism for and marker of interpersonal neural coupling, underpinning successful joint actions.
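Interpersonal coupling "quantified as interpersonal correlations" can be sketched as a sliding-window Pearson correlation between the two co-actors' time series (behavioural or neural). The window and step sizes below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def interpersonal_coupling(x, y, fs, win_s=1.0, step_s=0.25):
    # Sliding-window Pearson correlation between two co-actors' signals.
    # x, y: 1-D arrays sampled at fs Hz, aligned in time.
    # Returns window-centre times (s) and the correlation in each window.
    win, step = int(win_s * fs), int(step_s * fs)
    times, r = [], []
    for start in range(0, len(x) - win + 1, step):
        xs, ys = x[start:start + win], y[start:start + win]
        r.append(np.corrcoef(xs, ys)[0, 1])
        times.append((start + win / 2) / fs)
    return np.array(times), np.array(r)
```

A peak in this trace at action completion would correspond to the coupling maximum reported above.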
Neuroimaging data analysis often requires purpose-built software, which can be difficult to install and may produce different results across computing environments. Beyond being a roadblock to neuroscientists, these issues of accessibility and portability can hamper the reproducibility of neuroimaging data analysis pipelines. Here, we introduce the Neurodesk platform, which offers a sustainable, flexible solution; harnessing software containers to support a comprehensive and growing suite of neuroimaging software (https://www.neurodesk.org/). Neurodesk includes both a browser-accessible virtual desktop environment and a command line interface, mediating access to containerised neuroimaging software libraries from multiple systems; including personal computers, cloud computing, high-performance computers, and Jupyter notebooks. This community-driven, open-source platform represents a paradigm shift for neuroimaging data analysis, allowing for accessible, fully reproducible and portable data analysis pipelines, which can be redeployed in perpetuity, in any computing environment, with ease.
Keywords: amyotrophic lateral sclerosis, anarthria, BCI speller, brain-machine interface, computer keyboards, electroencephalography (EEG), filter-bank canonical correlation analysis (CCA), hyperscanning, open science, neuroimaging, quadriplegia, steady-state visual evoked potential (SSVEP), virtual reality (VR), visual selective attention

Abstract
Free and open communication is fundamental to modern life. Brain-computer interfaces (BCIs), which translate measurements of the user's brain activity into computer commands, present emerging forms of hands-free communication. BCI communication systems have long been used in clinical settings for patients with paralysis and other motor disorders, and yet have not been implemented for free communication between healthy, BCI-naïve users. Here, in two studies, we developed and validated a high-performance non-invasive BCI communication system, and examined its feasibility for communication during free word association and unprompted free conversation. Our system, focusing on usability for free communication, produced information transfer rates sufficient and practical for free association and brain-to-brain conversation (~5.7 words/minute). Our findings suggest that performance appraisals for BCI systems should incorporate the free communication scenarios for which they are ultimately intended. To facilitate free and open communication in healthy users and patients, we have made our source code and data open access.

Overview
Free and open communication is fundamental to modern life, scientific enterprise and democratic discourse. The rising prevalence of technologies such as virtual/augmented reality [1][2][3][4] and artificial intelligence [5; 6] has created new opportunities for hands-free communication and control. Brain-computer interfaces (BCI), which translate measurements of the user's brain activity into computer commands to control external devices [7][8][9][10][11], present emerging forms of hands-free communication.
BCI spellers are virtual keyboards that decode brain activity patterns, allowing users to select characters in sequence to spell words and, ultimately, freely communicate [12]. BCI keyboards mimic manual keyboards, which extend the user by allowing them to physically manifest their real-time thoughts, interface with the internet and communicate remotely. BCI communication systems, including spellers, have long been used in clinical settings to facilitate communication in cases of quadriplegia, anarthria and amyotrophic lateral sclerosis [13][14][15]. These systems are often developed using electroencephalography (EEG), which allows for portable, flexible and affordable devices [16]. BCI has the potential to revolutionise communication, and yet its potential for creating free communication in healthy users is largely unexplored [17]. Here we introduce a new and efficient non-invasive system, using sparse-electrode EEG, that allows free communication between individuals based on real-time brain activity decoding. Remarkable pro...
Brain-computer interfaces (BCIs) are a rapidly expanding field of study and require accurate and reliable real-time decoding of patterns of neural activity. These protocols often exploit selective attention, a neural mechanism that prioritises the sensory processing of task-relevant stimulus features (feature-based attention) or task-relevant spatial locations (spatial attention). Within the visual modality, attentional modulation of neural responses to different inputs is well indexed by steady-state visual evoked potentials (SSVEPs). These signals are reliably present in single-trial electroencephalography (EEG) data, are largely resilient to common EEG artifacts, and allow separation of neural responses to numerous concurrently presented visual stimuli. To date, efforts to use single-trial SSVEPs to classify visual attention for BCI control have largely focused on spatial attention rather than feature-based attention. Here, we present a dataset that allows for the development and benchmarking of algorithms to classify feature-based attention using single-trial EEG data. The dataset includes EEG and behavioural responses from 30 healthy human participants who performed a feature-based motion discrimination task on frequency tagged visual stimuli.
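A baseline for benchmarking against such a dataset is to extract the single-trial spectral amplitude at each tagging frequency and label the attended feature as the frequency with the largest response. The sketch below is a hypothetical starting point, assuming one tag frequency per feature; it is not the dataset's reference pipeline.

```python
import numpy as np

def tag_amplitudes(eeg, fs, freqs):
    # Single-trial SSVEP amplitude at each tagging frequency,
    # averaged over channels. eeg: (n_samples, n_channels).
    n = eeg.shape[0]
    spec = np.abs(np.fft.rfft(eeg, axis=0)) * 2 / n
    fbins = np.fft.rfftfreq(n, 1 / fs)
    return np.array([spec[np.argmin(np.abs(fbins - f))].mean()
                     for f in freqs])

def classify_attended(eeg, fs, freqs):
    # Label the attended feature as the tag frequency with the
    # largest single-trial amplitude.
    return int(np.argmax(tag_amplitudes(eeg, fs, freqs)))
```

Trial lengths that contain an integer number of cycles of each tag frequency keep the tags on exact FFT bins, which avoids spectral leakage between them.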