Electroencephalography (EEG) holds promise as a neuroimaging technology that can be used to understand how the human brain functions in real-world, operational settings while individuals move freely in perceptually rich environments. In recent years, several EEG systems have been developed that aim to increase the usability of neuroimaging technology in real-world settings. Here, the usability of three wireless EEG systems from different companies is compared to that of a conventional wired EEG system, BioSemi's ActiveTwo, which serves as an established laboratory-grade 'gold standard' baseline. The wireless systems compared are Advanced Brain Monitoring's B-Alert X10, Emotiv Systems' EPOC and the 2009 version of QUASAR's Dry Sensor Interface 10-20. The design of each wireless system is discussed in relation to its impact on the system's usability as a potential real-world neuroimaging system. Evaluations are based on having participants complete a series of cognitive tasks while wearing each of the EEG acquisition systems. This report focuses on the system design, usability factors and participant comfort issues that arose during the experimental sessions. In particular, the EEG systems are assessed on five design elements: adaptability of the system to differing head sizes, subject comfort and preference, variance in scalp locations of the recording electrodes, stability of the electrical connection between the scalp and electrode, and timing integration between the EEG system, the stimulus presentation computer and other external events.
As technology proliferates into all aspects of modern life, the world is in many ways becoming so dynamic and complex that technological capabilities are overwhelming the human capacity to optimally interact with and leverage those technologies. Fortunately, these technological advancements have also driven an explosion of neuroscience research over the past several decades, presenting engineers with a remarkable opportunity to design and develop flexible, adaptive brain-based neurotechnologies that integrate with and capitalize on human capabilities and limitations to improve human-system interactions. Major forerunners of this conception are brain-computer interfaces (BCIs), which to this point have been largely focused on improving the quality of life for particular clinical populations and include, for example, applications for advanced communication with paralyzed or "locked-in" patients as well as direct control of prostheses and wheelchairs. Near-term applications are envisioned that are primarily task-oriented and are targeted to avoid the most difficult obstacles to development. In the farther term, a holistic approach to BCIs will enable a broad range of task-oriented and opportunistic applications by leveraging pervasive technologies and advanced analytical approaches to sense and merge critical brain, behavioral, task, and environmental information. Communications and other applications that are envisioned to be broadly impacted by BCIs are highlighted; however, these represent just a small sample of the potential of these technologies.
Recent studies have generated debate regarding whether reflexive attention mechanisms are triggered in a purely automatic stimulus-driven manner. Behavioral studies have found that a nonpredictive "cue" stimulus will speed manual responses to subsequent targets at the same location, but only if that cue is congruent with actively maintained top-down settings for target detection. When a cue is incongruent with top-down settings, response times are unaffected, and this has been taken as evidence that reflexive attention mechanisms were never engaged in those conditions. However, manual response times may mask effects on earlier stages of processing. Here, we used event-related potentials to investigate the interaction of bottom-up sensory-driven mechanisms and top-down control settings at multiple stages of processing in the brain. Our results dissociate sensory-driven mechanisms that automatically bias early stages of visual processing from later mechanisms that are contingent on top-down control. An early enhancement of target processing in the extrastriate visual cortex (i.e., the P1 component) was triggered by the appearance of a unique bright cue, regardless of top-down settings. The enhancement of visual processing was prolonged, however, when the cue was congruent with top-down settings. Later processing in posterior temporal-parietal regions (i.e., the ipsilateral invalid negativity) was triggered automatically when the cue consisted of the abrupt appearance of a single new object. However, in cases where more than a single object appeared during the cue display, this stage of processing was contingent on top-down control. These findings provide evidence that visual information processing is biased at multiple levels in the brain, and the results distinguish automatically triggered sensory-driven mechanisms from those that are contingent on top-down control settings.
Patterns of neural data obtained from electroencephalography (EEG) can be classified by machine learning techniques to increase human-system performance. In controlled laboratory settings this classification approach works well; however, transitioning these approaches into more dynamic, unconstrained environments will present several significant challenges. One such challenge is an increase in temporal variability in measured behavioral and neural responses, which often results in suboptimal classification performance. Previously, we reported a novel classification method designed to account for temporal variability in the neural response in order to improve classification performance by using sliding windows in hierarchical discriminant component analysis (HDCA), and demonstrated a decrease in classification error by over 50% when compared to the standard HDCA method (Marathe et al., 2013). Here, we expand upon this approach and show that embedded within this new method is a novel signal transformation that, when applied to EEG signals, significantly improves the signal-to-noise ratio and thereby enables more accurate single-trial analysis. The results presented here have significant implications for both brain-computer interaction technologies and basic science research into neural processes.
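The hierarchical discriminant component analysis (HDCA) approach described above can be illustrated with a minimal NumPy sketch: spatial discriminant weights are learned within each of a set of overlapping (sliding) time windows, and a second, temporal discriminant is then learned over the per-window scores. This is a simplified illustration of the general HDCA idea, not the exact method of Marathe et al. (2013); the function names, window parameters, and the use of a regularized Fisher discriminant are all assumptions made for the example.

```python
import numpy as np

def fisher_weights(X, y, reg=1e-3):
    """Regularized Fisher linear discriminant: w = Sw^-1 (mu1 - mu0).

    X: (trials, features) feature matrix; y: binary labels (0/1).
    """
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Pooled within-class scatter, regularized for invertibility.
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])
    return np.linalg.solve(Sw, mu1 - mu0)

def hdca_scores(epochs, y, win=10, step=5):
    """Two-stage HDCA-style classifier (in-sample sketch).

    epochs: (trials, channels, samples) EEG epochs.
    Returns one discriminant score per trial (higher => class 1).
    """
    n_trials, n_channels, n_samples = epochs.shape
    window_scores = []
    for start in range(0, n_samples - win + 1, step):
        # Stage 1: average each overlapping window over time, then learn
        # spatial weights across channels for that window.
        Xw = epochs[:, :, start:start + win].mean(axis=2)
        w = fisher_weights(Xw, y)
        window_scores.append(Xw @ w)
    S = np.column_stack(window_scores)   # (trials, n_windows)
    # Stage 2: learn temporal weights over the per-window scores.
    v = fisher_weights(S, y)
    return S @ v
```

Because the windows overlap, a discriminative neural response whose latency jitters from trial to trial still falls largely inside some window, which is one intuition behind using sliding windows to tolerate temporal variability.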