Mental task classification is increasingly recognized as a major challenge in EEG signal processing and analysis. State-of-the-art approaches struggle with the spatially unstable structure of highly noisy EEG signals. To address this problem, this paper presents a multi-channel convolutional neural network architecture with adaptively optimized parameters. Our solution outperforms alternative methods in the classification accuracy of mental tasks (imagination of hand movements and generation of speech sounds) while providing high generalization capability (∼5%). Classification efficiency was achieved by a frequency-domain multi-channel feeding scheme, based on the analysis of EEG frequency sub-bands, and an architecture that supports feature mapping with two consecutive convolutional layers followed by a fully connected layer. On dataset V of BCI Competition III, the method achieved an average classification accuracy of nearly 70%, outperforming alternative methods. The presented solution processes frequency-domain input data with a multi-channel architecture that isolates frequency sub-bands within time windows, enabling multi-class signal classification that is highly generalizable and more accurate (by ∼1.2%) than existing solutions. This approach, combined with an appropriate learning strategy and parameter optimization adapted to the signal characteristics, outperforms reference single- and multi-channel networks such as AlexNet, VGG-16, and Cecotti's multi-channel NN. With a classification accuracy improvement of 1.2%, our solution is a clear advance over the top three state-of-the-art methods, whose improvements did not exceed 0.3%.
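The frequency-domain feeding scheme described above can be illustrated with a minimal sketch: each EEG channel is split into time windows, and per-channel power in a few frequency sub-bands is computed as the multi-channel input of the network. The band edges, window length, and hop size below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Hypothetical sub-bands (Hz); the paper's exact band edges are not given here.
SUB_BANDS = [(4, 8), (8, 12), (12, 30)]  # theta, alpha, beta

def band_power_features(eeg, fs, win_len, hop):
    """Split a multi-channel EEG record into time windows and return
    per-channel power in each frequency sub-band.

    eeg: array of shape (n_channels, n_samples)
    Returns: array of shape (n_windows, n_channels, n_bands), suitable as
    the multi-channel, frequency-domain input of a CNN.
    """
    n_ch, n_samp = eeg.shape
    feats = []
    for s in range(0, n_samp - win_len + 1, hop):
        win = eeg[:, s:s + win_len]
        spec = np.abs(np.fft.rfft(win, axis=1)) ** 2       # power spectrum
        freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)       # bin frequencies
        bands = [spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                 for lo, hi in SUB_BANDS]
        feats.append(np.stack(bands, axis=1))              # (n_ch, n_bands)
    return np.array(feats)
```

Each window then feeds one forward pass through the two convolutional layers and the fully connected classifier.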
Purpose
Technological innovation has made it possible to examine how a film cues particular reactions in its viewers. The purpose of this paper is to capture and interpret visual perception and attention through the simultaneous use of eye tracking and electroencephalography (EEG) technologies.
Design/methodology/approach
The authors have developed a method for the joint analysis of EEG and eye tracking. To achieve this goal, an algorithm was implemented to capture and interpret visual perception and attention through the simultaneous use of eye tracking and EEG technologies. All parameters were measured as functions of the relationship between the two signals, which in turn allowed the hypotheses to be validated more accurately through appropriately chosen calculations.
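One relationship between the two signals that such a joint analysis can quantify is spectral coherence. The sketch below, which is an illustrative assumption rather than the authors' actual algorithm, resamples a gaze trace onto the EEG time base and computes the magnitude-squared coherence between the two signals (the function name `align_and_coherence` and all parameter values are hypothetical).

```python
import numpy as np
from scipy.signal import coherence

def align_and_coherence(eeg_ch, eeg_t, gaze, gaze_t, fs, nperseg=256):
    """Resample a gaze-derived signal onto the EEG time base, then compute
    magnitude-squared coherence between the two signals.

    eeg_ch, eeg_t : one EEG channel and its timestamps (seconds)
    gaze, gaze_t  : gaze signal (e.g., horizontal position) and timestamps
    fs            : EEG sampling rate (Hz)
    Returns (freqs, Cxy) with Cxy values in [0, 1].
    """
    gaze_on_eeg = np.interp(eeg_t, gaze_t, gaze)   # linear resampling
    f, cxy = coherence(eeg_ch, gaze_on_eeg, fs=fs, nperseg=nperseg)
    return f, cxy
```

Resampling is needed because eye trackers and EEG amplifiers typically record at different rates; coherence values near 1 at a given frequency indicate a strong linear relationship between the signals at that frequency.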
Findings
The results of this study revealed a coherence between EEG and eye tracking that is of particular relevance for human perception.
Practical implications
This paper endeavors both to capture and to interpret visual perception and attention through the simultaneous use of eye tracking and EEG technologies. Eye tracking provides a powerful real-time measure of the viewer's region of interest, while EEG provides data on the viewer's emotional state while watching the film.
Originality/value
The approach in this paper is distinct from similar studies because it integrates eye tracking and EEG technologies. This paper provides a method for building a fully functional video introspection system.