Fundamental results on neurocognitive processes, combined with advances in decoding mental states from ongoing brain signals, have opened up a wide range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we believe we are still a long way from integrating brain-computer interface (BCI) technology into general interaction with computers, or from deploying neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. We also discuss why, in some of the prospective application domains, considerable effort is still required before these systems can cope with the full complexity of the real world.
We revisit the framework for brain-coupled image search, in which electroencephalography (EEG) recorded under a rapid serial visual presentation protocol is used to detect user preferences. Extending previous work on the synergy between content-based image labeling and EEG-based brain-computer interfaces (BCIs), we propose a different perspective on iterative coupling. Previously, the iterations were used to improve the set of EEG-based image labels before propagating them to the unseen images for the final retrieval. In our approach, we instead accumulate evidence for the true label of each image in the database across iterations, by propagating the EEG-based labels of the images presented at each iteration to the rest of the database. Our results demonstrate a continuous improvement of the labeling performance across iterations despite only moderate EEG-based labeling accuracy (AUC < 0.75). The overall analysis is carried out in terms of single-trial EEG decoding performance and the quality of the image database reorganization. Furthermore, we discuss the EEG-based labeling performance with respect to a search task on the same image database.
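The evidence-accumulation step can be sketched as follows. This is a minimal illustration only: it assumes images are represented by content-based feature vectors and that the EEG classifier outputs a per-image target-probability score; the function name, similarity-weighted averaging, and k-nearest-shown scheme are our own and not taken from the paper.

```python
import numpy as np

def propagate_labels(features, shown_idx, eeg_scores, evidence, k=3):
    """Propagate EEG-based label scores from the presented images to the
    whole database and accumulate evidence across iterations.

    features   : (n_images, d) content-based image descriptors
    shown_idx  : indices of the images presented this iteration
    eeg_scores : EEG-based target probabilities for the presented images
    evidence   : (n_images,) running evidence vector, updated in place
    """
    # cosine similarity between every database image and the shown ones
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[shown_idx].T                       # (n_images, n_shown)
    k = min(k, len(shown_idx))
    # each image inherits a similarity-weighted average of the EEG scores
    # of its k most similar presented images
    top = np.argsort(sims, axis=1)[:, -k:]
    w = np.clip(np.take_along_axis(sims, top, axis=1), 0.0, None)
    s = np.asarray(eeg_scores)[top]
    evidence += (w * s).sum(axis=1) / np.clip(w.sum(axis=1), 1e-9, None)
    return evidence
```

Repeated calls with new presentation batches accumulate evidence, so even a moderately accurate EEG labeler can progressively separate target from non-target images.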
Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration), in an active search task. In addition, our method for improving single-trial detection performance in this adverse scenario is an important step toward making brain-computer interfacing technology available for human-computer interaction applications.
Modern cars can support their drivers by autonomously assessing and performing different driving maneuvers based on information gathered by in-car sensors. We propose that brain-machine interfaces (BMIs) can provide complementary information that eases the interaction with intelligent cars and enhances the driving experience. In our approach, the human remains in control while a BMI monitors the driver's cognitive state, and that information is used to modulate the assistance provided by the intelligent car. In this review, we gather our proof-of-concept studies demonstrating the feasibility of decoding electroencephalography (EEG) correlates of upcoming actions, as well as correlates reflecting whether the decisions of driving assistance systems are in line with the driver's intentions. Experimental results obtained while driving both simulated and real cars consistently showed neural signatures of anticipation, movement preparation, and error processing. Remarkably, despite the increased noise inherent to real scenarios, these signals can be decoded on a single-trial basis, reflecting some of the cognitive processes that take place while driving. However, the moderate decoding performance compared to controlled experimental BMI paradigms indicates that there is room for improvement in the machine learning methods typically used in state-of-the-art BMIs. We foresee that fusing neural correlates with information extracted from other physiological measures (e.g., eye movements or electromyography, EMG), as well as contextual information gathered by in-car sensors, will allow intelligent cars to provide timely, tailored assistance only when it is required, thus keeping users in the loop and allowing them to fully enjoy the driving experience.
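One simple form such multimodal fusion could take is a naive-Bayes combination of per-modality classifier outputs. The sketch below is illustrative, not the specific method of any of the reviewed studies: it assumes each modality (EEG, EMG, gaze) independently emits a posterior probability of the same event under a shared prior.

```python
import numpy as np

def fuse_probabilities(probs, prior=0.5):
    """Naive-Bayes fusion of per-modality posterior probabilities.

    probs : iterable of P(event | modality) values in (0, 1), assumed
            conditionally independent given the event
    prior : the shared prior P(event) under which each posterior was computed
    Returns the fused posterior P(event | all modalities).
    """
    probs = np.asarray(probs, dtype=float)
    prior_odds = prior / (1 - prior)
    # each modality contributes its likelihood ratio under the shared prior
    lr = (probs / (1 - probs)) / prior_odds
    odds = prior_odds * np.prod(lr)
    return odds / (1 + odds)
```

Two weakly confident but agreeing modalities reinforce each other (two 0.6 posteriors fuse to about 0.69), while disagreeing modalities cancel out, which matches the intuition that assistance should be modulated only when the evidence is consistent.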
Objective. Event-related potentials (ERPs), which reflect the cognitive response to external stimuli, are widely used in brain-computer interfaces. ERP waveforms are characterized by a series of components of particular latency and amplitude. Classical ERP decoding methods exploit this waveform structure and thus achieve high performance only if there is sufficient time- and phase-locking across trials. This condition is not fulfilled when the experimental tasks are challenging or when one needs to generalize across experimental conditions. Features based on spatial covariances across channels can potentially overcome latency jitter and delays, since they aggregate information across time. Approach. We compared the performance stability of waveform- and covariance-based features, as well as their combination, in two simulated scenarios: 1) generalization across experiments on error-related potentials (ErrPs) and 2) dealing with larger latency jitter across trials. Main results. Features based on spatial covariances provide stable performance with only a minor decline under jitter levels of up to ±300 ms, whereas the decoding performance with waveform features quickly drops from 0.85 to 0.55 AUC. Generalization across ErrP experiments also resulted in significantly more stable performance with covariance-based features. Significance. The results confirm our hypothesis that covariance-based features can be used to 1) more reliably classify ERPs with higher intrinsic variability in challenging real-life applications and 2) generalize across related experimental protocols.
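A minimal sketch of a covariance-based ERP feature follows; it uses the augmented-covariance construction common in Riemannian ERP decoding, in which a class-mean template is stacked on top of each trial before computing the spatial covariance. The function names are illustrative, and this is not necessarily the exact estimator used in the study.

```python
import numpy as np

def spatial_cov(x):
    """Spatial covariance of one EEG trial x of shape (channels, samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    return (x @ x.T) / (x.shape[1] - 1)

def augmented_cov(trial, template):
    """Covariance of the trial stacked with a class-mean ERP template.

    The template-trial cross-covariance block carries waveform information,
    while the trial-trial block, which averages over samples, is invariant
    to reorderings of time points and hence robust to latency jitter.
    """
    return spatial_cov(np.vstack([template, trial]))
```

The resulting matrices can then be fed to any covariance-aware classifier (e.g., after tangent-space projection), which is what makes the feature tolerant to the jitter levels mentioned above.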
This paper considers heart rate variability from the fractal and multifractal standpoints. Short-term interbeat intervals of arrhythmia patients, recorded before, during, and after amiodarone therapy, were analyzed.
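As an illustration of the fractal viewpoint, a common way to quantify scaling in interbeat-interval series is detrended fluctuation analysis (DFA). The sketch below is a generic first-order DFA, not necessarily the exact method of the paper; scale choices and the function name are our own.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis.

    Returns the scaling exponent alpha of a series x (e.g., interbeat
    intervals): alpha ~ 0.5 for uncorrelated noise, ~ 1.0 for the 1/f-like
    fluctuations typically reported for healthy heart rate.
    """
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        segments = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # root-mean-square residual after removing a linear trend
        f2 = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]
```

Multifractal variants extend this by computing the fluctuation function for a range of moment orders q rather than only the root-mean-square (q = 2) case.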
Objective. In contrast to the classical visual brain–computer interface (BCI) paradigms, which adhere to a rigid trial structure and restricted user behavior, electroencephalogram (EEG)-based decoding of visual recognition during our daily activities remains challenging. The objective of this study is to explore the feasibility of decoding the EEG signature of visual recognition in experimental conditions promoting our natural ocular behavior when interacting with a dynamic environment. Approach. In our experiment, subjects visually search for a target object among suddenly appearing objects in the environment while driving a car simulator. Given that subjects exhibit unconstrained overt visual behavior, we based our study on eye fixation-related potentials (EFRPs). We report on gaze behavior and single-trial EFRP decoding performance (fixations on visually similar target vs. non-target objects). In addition, we demonstrate the application of our approach in a closed-loop BCI setup. Main results. To identify the target out of four symbol types along a road segment, the BCI system integrated the decoding probabilities of multiple EFRPs and achieved an average online accuracy of 0.37 ± 0.06 (12 subjects), statistically significantly above the chance level. Using the acquired data, we performed a comparative study of classification algorithms (discriminating target vs. non-target) and feature spaces in a simulated online scenario. The EEG approaches yielded similarly moderate performances of at most 0.6 AUC, yet still statistically significantly above the chance level. In addition, gaze duration (dwell time) appears to be an additional informative feature in this context. Significance. These results show that visual recognition of sudden events can be decoded during active driving. This study therefore lays a foundation for assistive and recommender systems based on the driver's brain signals.
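The integration of decoding probabilities over multiple fixations can be sketched as log-evidence accumulation per candidate symbol: the chosen symbol is the one whose fixations the classifier most consistently labels as "target" while all other fixations look like "non-target". The function below is an illustrative reconstruction with hypothetical names, not the system's actual implementation.

```python
import numpy as np

def pick_target(fixation_probs, symbol_ids, n_symbols=4):
    """Integrate single-fixation EFRP classifier outputs over a road segment.

    fixation_probs : P(target) from the classifier, one value per fixation
    symbol_ids     : index of the symbol fixated at each fixation
    Returns the index of the most likely target symbol.
    """
    p = np.clip(np.asarray(fixation_probs, dtype=float), 1e-6, 1 - 1e-6)
    ids = np.asarray(symbol_ids)
    scores = np.empty(n_symbols)
    for s in range(n_symbols):
        on = ids == s
        # log-evidence that symbol s is the target: its own fixations are
        # "target" and every other fixation is "non-target"
        scores[s] = np.sum(np.log(p[on])) + np.sum(np.log(1 - p[~on]))
    return int(np.argmax(scores))
```

Accumulating log-probabilities in this way lets several individually weak single-fixation decisions (around 0.6 AUC, per the abstract) combine into an above-chance choice among the four candidates.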