Background: Non-invasive brain–computer interfaces (BCIs) have been developed to enable natural, bi-directional interaction between users and external robotic systems. However, the artificial mapping between user intentions and BCI commands remains a critical issue. Recent BCIs have adopted intuitive decoding, which is key to solving problems such as the small number of available classes and the manual matching of BCI commands to device controls. Unfortunately, progress in this area has been slow owing to the lack of large, uniform datasets. This study provides a large intuitive dataset covering 11 different upper-extremity movement tasks obtained during multiple recording sessions. The dataset comprises 60-channel electroencephalography, 7-channel electromyography, and 4-channel electro-oculography signals from 25 healthy participants collected over 3-day sessions, for a total of 82,500 trials across all participants.
Findings: We validated the dataset through neurophysiological analysis. We observed clear sensorimotor activation/deactivation and the corresponding spatial distributions for real movement and motor imagery, respectively. Furthermore, we demonstrated the consistency of the dataset by evaluating the classification performance of each session using a baseline machine learning method.
Conclusions: The dataset includes multiple recording sessions, various classes within a single upper extremity, and multimodal signals. It can be used to (i) compare the brain activity associated with real movement and imagination, (ii) improve decoding performance, and (iii) analyze differences among recording sessions. As a Data Note, this study therefore focuses on collecting the data required for further advances in BCI technology.
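The trial arithmetic and a plausible in-memory layout for one session of this multimodal dataset can be sketched as follows. The per-task, per-session trial count, the sampling rate, and the epoch length are assumptions chosen to be consistent with the totals stated in the abstract, not figures taken from the dataset itself.

```python
import numpy as np

# Counts taken from the abstract.
N_SUBJECTS = 25       # healthy participants
N_SESSIONS = 3        # recording days
N_TASKS = 11          # upper-extremity movement tasks
# Assumption: trials per task per session, chosen to match the 82,500 total.
TRIALS_PER_TASK = 100

total_trials = N_SUBJECTS * N_SESSIONS * N_TASKS * TRIALS_PER_TASK  # 82,500

# One plausible (trials, channels, samples) layout for a single session;
# channel counts are from the abstract, FS and EPOCH_S are placeholders.
FS, EPOCH_S = 250, 4
n_session_trials = N_TASKS * TRIALS_PER_TASK
session_eeg = np.zeros((n_session_trials, 60, FS * EPOCH_S))
session_emg = np.zeros((n_session_trials, 7, FS * EPOCH_S))
session_eog = np.zeros((n_session_trials, 4, FS * EPOCH_S))
```

This layout keeps the three modalities trial-aligned, which simplifies the cross-modal and cross-session analyses the abstract proposes.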
Pumpkin is a promising alternative source of pectin. Pumpkin pectin has a unique chemical structure and physical properties, presumably conferring functional properties different from those of conventional commercial pectin sources. Depending on the extraction conditions, diverse molecular structures can be obtained and utilized in various food applications.
Recent advances in brain-computer interface (BCI) techniques have led to increasingly refined interactions between users and external devices. Accurately decoding kinematic information from brain signals is one of the main challenges in controlling human-like robots. In particular, although the forearm of the upper extremity is used frequently in daily life for high-level tasks, only a few studies have addressed decoding of forearm movement. In this study, we focus on classifying forearm movements by rotation angle using electroencephalogram (EEG) signals. To this end, we propose a hierarchical flow convolutional neural network (HF-CNN) model for robust classification. We evaluate the proposed model not only on our experimental dataset but also on a public dataset (BNCI Horizon 2020). On our experimental dataset, the grand-average classification accuracies over three rotation angles were 0.73 (±0.04) for the motor execution (ME) task and 0.65 (±0.09) for the motor imagery (MI) task across ten subjects. On the public dataset, the grand-average classification accuracies were 0.52 (±0.03) for the ME task and 0.51 (±0.04) for the MI task across fifteen subjects. These results demonstrate the possibility of decoding complex kinematic information from EEG signals. This study will contribute to the development of a brain-controlled robotic arm system capable of performing high-level tasks.
INDEX TERMS: Brain-computer interface (BCI), electroencephalogram (EEG), convolutional neural network (CNN), forearm motor execution and motor imagery.
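The abstract does not specify the internals of the HF-CNN, but the hierarchical-flow idea it names (a coarse decision followed by a class-conditional fine decision) can be illustrated independently of the network architecture. The sketch below stands in nearest-centroid classifiers for the CNN sub-networks and uses synthetic 2-D features; the ME/MI coarse split and the three angle classes per branch are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y):
    """One centroid per class; a stand-in for a learned sub-classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Synthetic features: two coarse clusters (ME vs MI), each with three
# angle sub-clusters, plus small noise.
base = np.repeat([[0.0, 0.0], [8.0, 8.0]], 30, axis=0)
angle = np.tile(np.repeat([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]], 10, axis=0), (2, 1))
X = base + angle + rng.normal(scale=0.3, size=(60, 2))
coarse_y = np.repeat([0, 1], 30)            # 0 = ME, 1 = MI (assumed)
fine_y = np.tile(np.repeat([0, 1, 2], 10), 2)  # rotation-angle index

# Stage 1 classifier, then one stage-2 classifier per coarse branch.
coarse = nearest_centroid_fit(X, coarse_y)
fine = {c: nearest_centroid_fit(X[coarse_y == c], fine_y[coarse_y == c])
        for c in (0, 1)}

def predict(x):
    c = nearest_centroid_predict(coarse, x)        # stage 1: task type
    a = nearest_centroid_predict(fine[c], x)       # stage 2: rotation angle
    return c, a
```

The design point this illustrates is that the fine-grained decision is conditioned on the coarse one, so each branch only has to separate three angle classes rather than all six task-angle combinations at once.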
A brain-computer interface (BCI) provides a direct communication pathway between a user and external devices. The motor imagery (MI) paradigm is widely used in non-invasive BCIs to control external devices by decoding user intentions. A long-standing problem in MI-BCI is obtaining enough EEG samples for deep learning techniques, because electroencephalography (EEG) data have intricate, non-stationary properties that cause discrepancies between sessions. Because of this discrepancy, EEG data recorded in different sessions cannot be treated as identically distributed. In this study, we recorded a large intuitive EEG dataset containing nine types of single-arm movements across 12 subjects. We propose SessionNet, which uses feature similarity to learn generality from EEG data recorded over multiple sessions and thereby improve classification performance. SessionNet also adopts the principle of a hierarchical convolutional neural network, which yields robust classification performance regardless of the number of classes. SessionNet outperforms conventional methods on the 3-class, 5-class, and two types of 7-class and 9-class single-arm tasks. Hence, our approach demonstrates the possibility of using feature similarity in a novel ensemble learning method to learn generality from multiple sessions of data for better MI classification performance.
INDEX TERMS: Brain-computer interface (BCI), electroencephalogram (EEG), motor imagery (MI), convolutional neural network (CNN), weighted ensemble learning.
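The abstract describes a feature-similarity-based weighted ensemble over sessions but not its exact formulation. One natural reading, sketched below as an assumption rather than SessionNet's actual method, is to weight each training session's prediction by how similar the test trial's features are to that session's mean features, then average the per-session class probabilities with those weights.

```python
import numpy as np

def similarity_weights(session_feats, test_feat):
    """Cosine similarity between the test feature vector and each session's
    mean feature vector, softmax-normalised into ensemble weights."""
    sims = np.array([
        f @ test_feat / (np.linalg.norm(f) * np.linalg.norm(test_feat))
        for f in session_feats
    ])
    e = np.exp(sims - sims.max())   # stable softmax
    return e / e.sum()

def ensemble_predict(session_probs, weights):
    """Weighted average of per-session class probabilities, then argmax."""
    return int(np.argmax(weights @ np.asarray(session_probs)))

# Toy example: three sessions vote on one 3-class trial. Feature vectors
# and probabilities are illustrative placeholders.
feats = [np.array([1.0, 0.0]), np.array([0.7, 0.7]), np.array([0.0, 1.0])]
test = np.array([0.9, 0.1])                    # closest to session 1
w = similarity_weights(feats, test)
probs = [[0.6, 0.3, 0.1],   # session 1's class probabilities
         [0.5, 0.4, 0.1],   # session 2's
         [0.2, 0.2, 0.6]]   # session 3's
pred = ensemble_predict(probs, w)
```

Sessions whose feature statistics resemble the test trial dominate the vote, which is one concrete way an ensemble can compensate for the session-to-session non-stationarity the abstract highlights.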