Despite significant recent progress in the area of Brain-Computer Interface (BCI), there are numerous shortcomings associated with collecting Electroencephalography (EEG) signals in real-world environments. These include, but are not limited to, subject and session data variance, long and arduous calibration processes, and poor predictive generalisation across different subjects or sessions. This implies that many downstream applications, including Steady State Visual Evoked Potential (SSVEP) based classification systems, can suffer from a shortage of reliable data. Generating meaningful and realistic synthetic data can therefore be of significant value in circumventing this problem. We explore the use of modern neural-based generative models, trained on a limited quantity of EEG data collected from different subjects, to generate supplementary synthetic EEG signal vectors, subsequently utilised to train an SSVEP classifier. Extensive experimental analysis demonstrates the efficacy of our generated data, leading to improvements across a variety of evaluations, with the crucial task of cross-subject generalisation improving by over 35% with the use of such synthetic data.
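The augmentation pipeline described above can be sketched in a few lines. The abstract's generator is a neural model (e.g. a GAN or VAE); the per-class Gaussian below is a deliberately simple stand-in, and all array sizes are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 20 real EEG feature vectors per SSVEP class (64 values each).
n_classes, n_real, n_feat = 3, 20, 64
real_X = rng.normal(size=(n_classes, n_real, n_feat))

def fit_and_sample(class_data, n_synth, rng):
    """Fit a simple per-class generative model and draw synthetic vectors.
    Stand-in for a neural generative model trained on limited EEG data."""
    mu = class_data.mean(axis=0)
    sd = class_data.std(axis=0) + 1e-6
    return rng.normal(mu, sd, size=(n_synth, len(mu)))

# Generate 100 synthetic vectors per class and pool them with the real data,
# yielding an enlarged training set for the downstream SSVEP classifier.
synth = np.stack([fit_and_sample(real_X[c], 100, rng) for c in range(n_classes)])
aug_X = np.concatenate([real_X, synth], axis=1)   # (3, 120, 64)
aug_y = np.repeat(np.arange(n_classes), aug_X.shape[1])
print(aug_X.shape, aug_y.shape)   # (3, 120, 64) (360,)
```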
Electroencephalography (EEG) is a common signal acquisition approach employed for Brain-Computer Interface (BCI) research. Nevertheless, the majority of EEG acquisition devices rely on the cumbersome application of conductive gel (so-called wet-EEG) to ensure a high quality signal is obtained. However, this process is unpleasant for the experimental participants and thus limits the practical application of BCI. In this work, we explore the use of a commercially available dry-EEG headset to obtain visual cortical ensemble signals. Whilst improving the usability of EEG within the BCI context, dry-EEG suffers from inherently reduced signal quality due to the lack of conductive gel, making the classification of such signals significantly more challenging. In this paper, we propose a novel Convolutional Neural Network (CNN) approach for the classification of raw dry-EEG signals without any data pre-processing. To illustrate the effectiveness of our approach, we utilise the Steady State Visual Evoked Potential (SSVEP) paradigm as our use case. SSVEP can be utilised to allow people with severe physical disabilities, such as Complete Locked-In Syndrome or Amyotrophic Lateral Sclerosis, to be aided via BCI applications, as it requires only that the subject fixate upon the sensory stimuli of interest. Here we utilise SSVEP flicker frequencies between 10 and 30 Hz, which we record as subject cortical waveforms via the dry-EEG headset. Our proposed end-to-end CNN allows us to automatically and accurately classify SSVEP stimulation directly from the dry-EEG waveforms. Our CNN architecture utilises a common SSVEP Convolutional Unit (SCU), comprising a 1D convolutional layer, batch normalisation and max pooling. Furthermore, we compare several deep learning neural network variants with our primary CNN architecture, in addition to traditional machine learning classification approaches.
Experimental evaluation shows our CNN architecture to be significantly better than competing approaches, achieving a classification accuracy of 96% whilst demonstrating superior cross-subject performance and even being able to generalise well to unseen subjects whose data is entirely absent from the training process.
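The SSVEP Convolutional Unit described above (1D convolution, then batch normalisation, then max pooling) can be sketched as a plain numpy forward pass. Filter count, kernel width and pooling size here are illustrative assumptions, not the paper's hyper-parameters.

```python
import numpy as np

def scu_forward(x, kernels, eps=1e-5):
    """One SSVEP Convolutional Unit (SCU):
    1D convolution -> batch normalisation -> max pooling (window 2, stride 2).
    x: (batch, channels, time); kernels: (out_ch, in_ch, width)."""
    b, c_in, t = x.shape
    c_out, _, k = kernels.shape
    # Valid 1D convolution across the time axis
    conv = np.zeros((b, c_out, t - k + 1))
    for o in range(c_out):
        for i in range(t - k + 1):
            conv[:, o, i] = np.einsum('bck,ck->b', x[:, :, i:i + k], kernels[o])
    # Batch normalisation over the batch and time axes, per filter
    mu = conv.mean(axis=(0, 2), keepdims=True)
    var = conv.var(axis=(0, 2), keepdims=True)
    norm = (conv - mu) / np.sqrt(var + eps)
    # Max pooling with window 2, stride 2
    pooled = norm[:, :, :norm.shape[2] // 2 * 2].reshape(b, c_out, -1, 2).max(axis=3)
    return pooled

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 100))   # 4 trials, 8 dry-EEG channels, 100 time samples
w = rng.normal(size=(16, 8, 7))    # 16 learned filters of width 7
out = scu_forward(x, w)
print(out.shape)                   # (4, 16, 47)
```

Stacking several such units, then a dense softmax layer, yields the kind of end-to-end raw-signal classifier the abstract describes.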
Brain-computer interfaces (BCI) harnessing Steady State Visual Evoked Potentials (SSVEP) manipulate the frequency and phase of visual stimuli to generate predictable oscillations in neural activity. For BCI spellers, oscillations are matched with alphanumeric characters, allowing users to select target numbers and letters. Advances in BCI spellers can, in part, be accredited to subject-specific optimisation, including: 1) custom electrode arrangements, 2) filter sub-band assessments and 3) stimulus parameter tuning. Here we apply deep convolutional neural networks (DCNN), demonstrating cross-subject functionality for the classification of frequency and phase encoded SSVEP. Electroencephalogram (EEG) data are collected and classified using the same parameters across subjects. Subjects fixate on forty randomly cued flickering characters (5 × 8 keyboard array) during concurrent wet-EEG acquisition. These data are provided by an open source SSVEP dataset. Our proposed DCNN, PodNet, achieves 86% and 77% offline classification accuracy across subjects for two data capture periods, respectively: 6 seconds (information transfer rate = 40 bpm) and 2 seconds (information transfer rate = 101 bpm). Subjects demonstrating sub-optimal (<70%) performance are classified to similar levels after a short subject-specific training period. PodNet outperforms filter-bank canonical correlation analysis (FBCCA) for a low-volume (3-channel) clinically feasible occipital electrode configuration. The networks defined in this study achieve functional performance for the largest number of SSVEP classes decoded via DCNN to date. Our results demonstrate that PodNet achieves cross-subject, calibrationless classification and adaptability to sub-optimal subject data and low-volume EEG electrode arrangements.
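The reported information transfer rates can be sanity-checked with the standard Wolpaw ITR formula. The 6-second figure reproduces almost exactly; the 2-second figure lands slightly below 101 bpm, plausibly because the effective selection time in the source protocol differs slightly (e.g. includes a gaze-shift gap) — an assumption on our part.

```python
import math

def itr_bpm(n_classes, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    bits = (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / seconds_per_selection)

# 40-class speller with the accuracies quoted in the abstract
print(round(itr_bpm(40, 0.86, 6.0), 1))   # ~40.0 bpm
print(round(itr_bpm(40, 0.77, 2.0), 1))   # ~99.8 bpm, close to the reported 101
```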
(2019) 'Using variable natural environment brain-computer interface stimuli for real-time humanoid robot navigation'. This paper addresses the challenge of humanoid robot teleoperation in a natural indoor environment via a Brain-Computer Interface (BCI). We leverage deep Convolutional Neural Network (CNN) based image and signal understanding to facilitate both real-time object detection and dry-Electroencephalography (EEG) based human cortical brain bio-signal decoding. We employ recent advances in dry-EEG technology to stream and collect cortical waveforms from subjects while they fixate on variable Steady State Visual Evoked Potential (SSVEP) stimuli generated directly from the environment the robot is navigating. To this end, we propose the use of novel variable BCI stimuli, utilising the real-time video streamed via the on-board robot camera as visual input for SSVEP, where the CNN-detected natural scene objects are altered and flickered with differing frequencies (10 Hz, 12 Hz and 15 Hz).
These stimuli are not akin to traditional stimuli, as both the dimensions of the flicker regions and their on-screen positions change depending on the scene objects detected. On-screen object selection via such a dry-EEG enabled SSVEP methodology facilitates the online decoding of human cortical brain signals, via a specialised secondary CNN, directly into robot teleoperation commands (approach object; move in a specific direction: right, left or back). This SSVEP decoding model is trained on a priori offline experimental data in which very similar visual input is presented to all subjects. The resulting classification demonstrates high performance, with a mean accuracy of 85% for the real-time robot navigation experiment across multiple test subjects.
We investigate the performance of uncertainty quantification methods, namely deep ensembles and bootstrap resampling, for deep neural network (DNN) predictions of transition metal K-edge X-ray absorption near-edge structure (XANES) spectra....
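The two uncertainty quantification recipes named above differ only in how ensemble members are made to disagree. A minimal sketch, using a polynomial fit as a stand-in for the DNN spectrum predictor (the real work trains deep networks on XANES spectra; all data and model choices here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem standing in for spectrum prediction
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=50)
x_new = np.array([0.25, 0.5, 0.75])        # query points

def fit_predict(xt, yt, xq, seed):
    """Deep-ensemble analogue: each member gets its own random seed.
    (A DNN member would differ via weight initialisation; the fit here
    is deterministic, so we inject per-member noise instead.)"""
    r = np.random.default_rng(seed)
    coeffs = np.polyfit(xt, yt + r.normal(scale=0.05, size=len(yt)), deg=5)
    return np.polyval(coeffs, xq)

# Deep-ensemble style: independently "trained" members on the full data
ens = np.stack([fit_predict(x, y, x_new, s) for s in range(10)])

# Bootstrap style: each member sees a resampled-with-replacement training set
boot = []
for _ in range(10):
    idx = rng.integers(0, len(x), len(x))
    coeffs = np.polyfit(x[idx], y[idx], deg=5)
    boot.append(np.polyval(coeffs, x_new))
boot = np.stack(boot)

# Predictive mean and uncertainty (std across members) at each query point
ens_mean, ens_std = ens.mean(axis=0), ens.std(axis=0)
boot_mean, boot_std = boot.mean(axis=0), boot.std(axis=0)
print(ens_mean.shape, ens_std.shape)    # (3,) (3,)
print(boot_mean.shape, boot_std.shape)  # (3,) (3,)
```

In both cases the spread across members is the uncertainty estimate; only the source of member-to-member variation differs.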