An important subfield of brain–computer interfaces is the classification of motor imagery (MI) signals, in which an action, for example a hand movement, is mentally simulated rather than executed. The brain dynamics of MI are usually measured by electroencephalography (EEG) owing to its noninvasiveness. The next generation of brain–computer interface systems can benefit from generative deep learning (GDL) models, which provide end‐to‐end (e2e) machine learning and can increase accuracy. In this study, to exploit the e2e property of deep learning models, a novel GDL methodology is proposed that requires only minimal, objective‐free preprocessing steps. Furthermore, to handle complicated multi‐class MI–EEG signals, an innovative multilevel GDL‐based classification scheme is proposed. The effectiveness of the proposed model and its robustness against noisy MI–EEG signals are evaluated using two different GDL models, namely a deep belief network and a stacked sparse autoencoder, in an e2e manner. Experimental results demonstrate the effectiveness of the proposed methodology, with improved accuracy compared with the widely used filter bank common spatial patterns algorithm.
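The multilevel classification scheme can be pictured as a tree of binary decisions, each node standing in for one GDL classifier. The sketch below is illustrative only: the four class names (the standard four-class MI setup) and the particular two-level split are assumptions, not the paper's exact hierarchy, and the toy lambdas stand in for trained models.

```python
# Hypothetical sketch of a multilevel (hierarchical) multi-class scheme.
# Each classifier is any callable mapping a trial (feature vector) to 0 or 1.

def classify_multilevel(trial, level1, level2_hand, level2_other):
    """Route a trial through a two-level tree of binary classifiers."""
    if level1(trial) == 0:             # level 1: hand vs. non-hand imagery
        return "left_hand" if level2_hand(trial) == 0 else "right_hand"
    else:                              # level 2: feet vs. tongue imagery
        return "feet" if level2_other(trial) == 0 else "tongue"

# Toy stand-in classifiers operating on a 2-element feature vector.
l1 = lambda x: 0 if x[0] > 0 else 1
l2_hand = lambda x: 0 if x[1] > 0 else 1
l2_other = lambda x: 0 if x[1] > 0 else 1

print(classify_multilevel([1.0, -0.5], l1, l2_hand, l2_other))  # right_hand
```

In practice each node would be a DBN or stacked sparse autoencoder trained on the raw (minimally preprocessed) EEG, so the multi-class problem reduces to easier binary subproblems.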
Face inpainting is a challenging task that aims to fill damaged or masked regions of face images with plausibly synthesized content. Based on the given information, the reconstructed regions should look realistic and, more importantly, preserve the demographic and biometric properties of the individual. The aim of this paper is to reconstruct the face from the periocular region (eyes-to-face). To do this, we propose a novel GAN-based deep learning model called Eyes-to-Face GAN (E2F-GAN), which includes two main modules: a coarse module and a refinement module. The coarse module, together with an edge predictor module, extracts the required features from the periocular region and generates a coarse output, which is then refined by the refinement module. Additionally, an eyes-to-face synthesis dataset has been generated from the public CelebA-HQ face dataset for training and testing. We perform both qualitative and quantitative evaluations on this dataset. Experimental results demonstrate that our method outperforms previous learning-based face inpainting methods and generates realistic and semantically plausible images. We also provide an implementation of the proposed approach to support reproducible research at https://github.com/amiretefaghi/E2F-GAN.
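A common building block in such inpainting pipelines is the composite step: the generator's prediction is kept only inside the missing region, while the known pixels (here, the periocular region) are copied through unchanged. The sketch below shows this standard masked blend with NumPy; it is a generic illustration under assumed names (`coarse_out` stands in for a coarse-module prediction), not the paper's exact E2F-GAN implementation.

```python
import numpy as np

def composite(image, generated, mask):
    """Keep generated pixels where mask == 1 (missing region), else originals."""
    return mask * generated + (1.0 - mask) * image

rng = np.random.default_rng(0)
image = rng.random((4, 4))        # known pixels (e.g., periocular crop)
coarse_out = rng.random((4, 4))   # hypothetical coarse-module output
mask = np.zeros((4, 4))
mask[2:, :] = 1.0                 # bottom half is missing and must be filled

result = composite(image, coarse_out, mask)
assert np.allclose(result[:2], image[:2])       # known region preserved
assert np.allclose(result[2:], coarse_out[2:])  # missing region synthesized
```

Applying the same blend after the refinement module guarantees that the visible periocular pixels are passed through exactly, so only the occluded face area is ever synthesized.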