Objective: Although deep learning has been applied effectively in brain-computer interface (BCI) systems, it has not yet been applied successfully to inter-subject classification in cognitive BCI. In this paper, we propose a framework based on a deep convolutional neural network (CNN) to detect attentive mental states from single-channel raw electroencephalography (EEG) data. Approach: We develop an end-to-end deep CNN to decode attentional information from EEG time series. We also examine how the input representation affects the performance of the deep CNN by feeding three different EEG representations into the network. To ensure the practical applicability of the proposed framework and to avoid time-consuming re-training, we adopt inter-subject transfer learning as the classification strategy. Finally, to interpret the learned attentional patterns, we visualize and analyze the network's perception of the attention and non-attention classes. Main results: The average classification accuracy is 79.26%, with only 15.83% of the 120 subjects falling below 70% accuracy (a generally accepted threshold for BCI), even though high classification accuracy is difficult to achieve with an inter-subject approach. This end-to-end classification framework surpasses conventional classification methods for attention detection, and the visualization results confirm that the patterns learned from raw data are meaningful. Significance: This framework significantly improves attention detection accuracy in inter-subject classification. Moreover, this study sheds light on end-to-end learning: the proposed network is capable of learning from raw data with minimal pre-processing, which in turn eliminates the extensive computational load of time-consuming data preparation and feature extraction.
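The end-to-end idea described in this abstract can be sketched as a small 1D CNN that maps a raw single-channel EEG window directly to attention/non-attention logits. This is a minimal illustration in PyTorch, not the authors' architecture: the filter counts, kernel sizes, and the 512-sample window length are all assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class AttentionCNN(nn.Module):
    """Illustrative end-to-end 1D CNN: raw EEG window -> class logits.
    Layer sizes are assumptions for the sketch, not the paper's network."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),  # temporal filters on raw EEG
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # pool to a fixed-size feature vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_samples) raw EEG windows
        z = self.features(x).squeeze(-1)                 # (batch, 32)
        return self.classifier(z)                        # (batch, n_classes)

model = AttentionCNN()
x = torch.randn(8, 1, 512)   # a batch of 8 synthetic single-channel windows
logits = model(x)
print(logits.shape)          # torch.Size([8, 2])
```

In an inter-subject transfer setting, such a network would be trained on pooled data from many subjects and then applied (or lightly fine-tuned) on a new subject without full re-training.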
Commercial 3D scene acquisition systems such as the Microsoft Kinect sensor can reduce the cost barrier of realizing mid-air interaction. However, since such sensors can robustly sense hand position but not hand orientation, current mid-air interaction methods for 3D virtual object manipulation often require contextual and mode switching to perform translation, rotation, and scaling, preventing natural, continuous gestural interaction. A novel handle bar metaphor is proposed as an effective visual control metaphor between the user's hand gestures and the corresponding virtual object manipulation operations. It mimics the familiar situation of handling objects that are skewered with a bimanual handle bar. Using the relative 3D motion of the two hands to design the mid-air interaction allows us to provide precise controllability despite the Kinect sensor's low image resolution. A comprehensive repertoire of 3D manipulation operations is proposed to manipulate single objects, perform fast constrained rotation, and pack/align multiple objects along a line. Three user studies were devised to demonstrate the efficacy and intuitiveness of the proposed interaction techniques in different virtual manipulation scenarios.
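The core of the handle bar metaphor, deriving translate/rotate/scale updates from the relative 3D motion of the two tracked hands, can be sketched in a few lines. This is a hedged illustration of the geometric idea only; the function name `handle_bar_update` and the exact mapping are assumptions, not the paper's implementation.

```python
import numpy as np

def handle_bar_update(l0, r0, l1, r1):
    """Map old (l0, r0) and new (l1, r1) hand positions to a translation
    vector, a rotation axis and angle, and a uniform scale factor."""
    mid0, mid1 = (l0 + r0) / 2, (l1 + r1) / 2
    translation = mid1 - mid0                           # object moves with the bar's midpoint

    v0, v1 = r0 - l0, r1 - l1                           # the "bar" between the two hands
    scale = np.linalg.norm(v1) / np.linalg.norm(v0)     # stretching the bar scales the object

    u0 = v0 / np.linalg.norm(v0)
    u1 = v1 / np.linalg.norm(v1)
    axis = np.cross(u0, u1)                             # rotation axis (zero if bars are parallel)
    angle = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    return translation, axis, angle, scale

# Example: the bar is rotated 90 degrees in the plane and doubled in length.
l0, r0 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
l1, r1 = np.array([0., 0., 0.]), np.array([0., 2., 0.])
t, axis, angle, s = handle_bar_update(l0, r0, l1, r1)
print(t, axis, angle, s)
```

Because every update depends only on the *relative* motion of the two hands, small absolute tracking errors (as with the Kinect's low image resolution) largely cancel out, which is the design rationale the abstract alludes to.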
Measuring attention from the electroencephalogram (EEG) has found applications in the treatment of Attention Deficit Hyperactivity Disorder (ADHD). It is of great interest to understand which features of the EEG are most representative of attention. Intensive research in the past has shown that frequency band powers and their ratios are effective features for detecting attention. However, unanswered questions remain: which features of the EEG are most discriminative between attentive and non-attentive states? Are these features common to all subjects, or are they subject-specific and in need of optimization for each subject? Using Mutual Information (MI) to perform subject-specific feature selection on a large data set of 120 ADHD children, we found that besides the theta/beta ratio (TBR), which is commonly used in attention detection and neurofeedback, the relative beta power and theta/(alpha+beta) (TBAR) are equally significant and informative for attention detection. Interestingly, we found that the relative theta power (which is also commonly used) may not carry sufficient discriminative information on its own (it is informative for only 3.26% of the ADHD children). We have also demonstrated that although these features (relative beta power, TBR, and TBAR) are the most important measures for detecting attention on average, different subjects have different sets of most discriminative features.
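The band-power features discussed above (relative beta power, TBR, TBAR) can be computed from a single EEG channel with a standard power-spectral-density estimate. This is a minimal sketch on synthetic data; the band boundaries used here (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz, total 1-45 Hz) are conventional assumptions and may differ from the study's exact definitions.

```python
import numpy as np
from scipy.signal import welch

# Synthetic single-channel EEG segment: 2 s at an assumed 256 Hz sampling rate.
fs = 256
rng = np.random.default_rng(0)
eeg = rng.standard_normal(2 * fs)

# Power spectral density via Welch's method (1 Hz frequency resolution).
freqs, psd = welch(eeg, fs=fs, nperseg=fs)

def band_power(freqs, psd, lo, hi):
    """Absolute power in [lo, hi) Hz, approximated from the PSD."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
beta  = band_power(freqs, psd, 13, 30)
total = band_power(freqs, psd, 1, 45)

rel_beta = beta / total            # relative beta power
tbr      = theta / beta            # theta/beta ratio (TBR)
tbar     = theta / (alpha + beta)  # theta/(alpha+beta) (TBAR)
print(rel_beta, tbr, tbar)
```

For subject-specific selection as in the study, such features would be computed per epoch and ranked per subject, e.g. by mutual information between each feature and the attentive/non-attentive label.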