Modern cars can support their drivers by autonomously assessing and performing different driving maneuvers based on information gathered by in-car sensors. We propose that brain–machine interfaces (BMIs) can provide complementary information that eases the interaction with intelligent cars and enhances the driving experience. In our approach, the human remains in control, while a BMI monitors the driver's cognitive state and uses that information to modulate the assistance provided by the intelligent car. In this review, we gather our proof-of-concept studies demonstrating the feasibility of decoding electroencephalography (EEG) correlates of upcoming actions, as well as correlates reflecting whether the decisions of driving assistance systems are in line with the driver's intentions. Experimental results obtained while driving both simulated and real cars consistently showed neural signatures of anticipation, movement preparation and error processing. Remarkably, despite the increased noise inherent to real scenarios, these signals can be decoded on a single-trial basis, reflecting some of the cognitive processes that take place while driving. However, the moderate decoding performance, compared to that of controlled experimental BMI paradigms, indicates that there is room for improvement in the machine learning methods typically used in state-of-the-art BMIs. We foresee that fusing neural correlates with information extracted from other physiological measures (e.g., eye movements or electromyography, EMG), as well as with contextual information gathered by in-car sensors, will allow intelligent cars to provide timely and tailored assistance only when it is required, thus keeping users in the loop and allowing them to fully enjoy the driving experience.
Objective. Event-related potentials (ERPs), which reflect cognitive responses to external stimuli, are widely used in brain–computer interfaces. ERP waveforms are characterized by a series of components of particular latency and amplitude. Classical ERP decoding methods exploit this waveform structure and thus achieve high performance only if there is sufficient time- and phase-locking across trials. This condition is not fulfilled if the experimental task is challenging or if generalization across various experimental conditions is required. Features based on spatial covariances across channels can potentially overcome latency jitter and delays, since they aggregate information across time. Approach. We compared the performance stability of waveform- and covariance-based features, as well as their combination, in two simulated scenarios: 1) generalization across experiments on error-related potentials (ErrPs) and 2) dealing with larger latency jitter across trials. Main results. Features based on spatial covariances provide stable performance, with only a minor decline under jitter levels of up to ±300 ms, whereas decoding performance with waveform features quickly drops from 0.85 to 0.55 AUC. Generalization across ErrP experiments also yielded a significantly more stable performance with covariance-based features. Significance. The results confirm our hypothesis that covariance-based features can be used to 1) more reliably classify ERPs with higher intrinsic variability in challenging real-life applications and 2) generalize across related experimental protocols.
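The jitter robustness claimed for covariance-based features follows from the fact that a spatial covariance aggregates over the time axis, so a latency shift of the ERP barely changes it. A minimal NumPy sketch (not the authors' pipeline; epoch shapes and the regularization constant are illustrative assumptions) that computes per-trial spatial covariance features and demonstrates their insensitivity to a temporal shift:

```python
import numpy as np

def spatial_cov_features(epochs, reg=1e-6):
    """Per-trial spatial covariance features.

    epochs: array of shape (n_trials, n_channels, n_samples).
    Returns one vector per trial: the upper triangle of the
    channel-by-channel covariance matrix, which sums over the time
    axis and is therefore largely insensitive to ERP latency jitter.
    """
    n_trials, n_ch, _ = epochs.shape
    iu = np.triu_indices(n_ch)
    feats = []
    for x in epochs:
        x = x - x.mean(axis=1, keepdims=True)          # remove channel means
        c = (x @ x.T) / x.shape[1] + reg * np.eye(n_ch)  # regularized covariance
        feats.append(c[iu])
    return np.asarray(feats)

# Demo on synthetic data: circularly shifting a trial in time leaves
# its spatial covariance unchanged (the sum over samples is the same),
# whereas a time-locked waveform average would be smeared out.
rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 512))       # 8 channels, 512 samples
shifted = np.roll(trial, 150, axis=1)       # +150-sample latency shift
f0 = spatial_cov_features(trial[None])[0]
f1 = spatial_cov_features(shifted[None])[0]
print(np.allclose(f0, f1))                  # True
```

These feature vectors would then be fed to a standard classifier; Riemannian-geometry toolboxes offer more principled handling of such covariance matrices, but the shift-invariance argument is already visible in this plain sample-covariance form.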
Objective. In contrast to the classical visual brain–computer interface (BCI) paradigms, which adhere to a rigid trial structure and restricted user behavior, electroencephalogram (EEG)-based decoding of visual recognition during our daily activities remains challenging. The objective of this study is to explore the feasibility of decoding the EEG signature of visual recognition under experimental conditions promoting natural ocular behavior during interaction with a dynamic environment. Approach. In our experiment, subjects visually search for a target object among suddenly appearing objects in the environment while driving a car simulator. Given that subjects exhibit unconstrained overt visual behavior, we based our study on eye fixation-related potentials (EFRPs). We report on gaze behavior and single-trial EFRP decoding performance (fixations on visually similar target vs. non-target objects). In addition, we demonstrate the application of our approach in a closed-loop BCI setup. Main results. To identify the target out of four symbol types along a road segment, the BCI system integrated the decoding probabilities of multiple EFRPs and achieved an average online accuracy of 0.37 ± 0.06 (12 subjects), statistically significantly above the chance level. Using the acquired data, we performed a comparative study of classification algorithms (discriminating target vs. non-target) and feature spaces in a simulated online scenario. The EEG approaches yielded similar moderate performances of at most 0.6 AUC, yet statistically significantly above the chance level. In addition, the gaze duration (dwell time) appears to be an additional informative feature in this context. Significance. These results show that visual recognition of sudden events can be decoded during active driving. Therefore, this study lays a foundation for assistive and recommender systems based on the driver's brain signals.
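The abstract states that the closed-loop system identified the target symbol by integrating decoding probabilities over multiple fixations, but does not specify the fusion rule. A minimal sketch of one plausible scheme, accumulating per-fixation log-odds under an independence assumption (the symbol labels and probabilities below are invented for illustration):

```python
import math
from collections import defaultdict

def pick_target(fixations):
    """Integrate single-fixation classifier outputs over a road segment.

    fixations: list of (symbol_id, p_target) pairs, where p_target is
    the per-fixation classifier probability that the fixated object is
    the sought target. Evidence from repeated fixations on the same
    symbol is combined by summing log-likelihood ratios (naive
    independence assumption); the symbol with the highest accumulated
    evidence is reported as the target.
    """
    eps = 1e-9
    score = defaultdict(float)
    for sym, p in fixations:
        p = min(max(p, eps), 1 - eps)        # clip away 0/1 extremes
        score[sym] += math.log(p / (1 - p))  # log-odds of "target"
    return max(score, key=score.get)

# Hypothetical segment with four symbol types (A-D): individual
# fixation probabilities are noisy, but B accumulates the most evidence.
fix = [("A", 0.40), ("B", 0.70), ("C", 0.35),
       ("B", 0.65), ("D", 0.45), ("A", 0.30)]
print(pick_target(fix))                      # prints B
```

This kind of evidence accumulation is what makes a moderate single-fixation AUC (around 0.6) usable: combining several noisy fixations per symbol lifts the final selection accuracy above chance, consistent with the 0.37 online accuracy reported against a 0.25 chance level for four symbols.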
Rivastigmine has been shown to improve cognition in HIV+ patients with minor neurocognitive disorders; however, the mechanisms underlying this beneficial effect are currently unknown. To assess whether rivastigmine therapy is associated with decreased brain inflammation and damage, we performed T1/T2* relaxometry and magnetization transfer imaging in 17 aviremic HIV+ patients with minor neurocognitive disorders enrolled in a randomized crossover rivastigmine trial. Rivastigmine therapy was associated with changes in MRI metrics indicating a decrease in brain water content (i.e., edema reabsorption) and/or reduced demyelination/axonal damage. Furthermore, MRI changes correlated with cognitive improvement on rivastigmine therapy.