Electroencephalography (EEG) is a well-established non-invasive method in neuroscientific research and clinical diagnostics. It provides a high temporal but low spatial resolution of brain activity. To gain insight into the spatial dynamics underlying the EEG, one has to solve the inverse problem, i.e., find the neural sources that give rise to the recorded scalp activity. The inverse problem is ill-posed: more than one configuration of neural sources can evoke one and the same distribution of EEG activity on the scalp. Artificial neural networks have previously been used successfully to localize either one or two dipole sources. These approaches, however, have never solved the inverse problem in a distributed dipole model with more than two dipole sources. We present ConvDip, a novel convolutional neural network (CNN) architecture that solves the EEG inverse problem in a distributed dipole model based on simulated EEG data. We show that (1) ConvDip learns to produce inverse solutions from a single time point of EEG data, (2) it outperforms state-of-the-art methods on all evaluated performance measures, (3) it is more flexible when dealing with varying numbers of sources, producing fewer ghost sources and missing fewer real sources than the comparison methods, and it produces plausible inverse solutions for real EEG recordings from human participants, and (4) the trained network needs less than 40 ms for a single prediction. Our results qualify ConvDip as an efficient and easy-to-apply novel method for source localization in EEG data, with high relevance for clinical applications, e.g., in epileptology, and for real-time applications.
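To make the ill-posed inverse problem concrete, the sketch below simulates a single EEG time point from a distributed dipole model and recovers the sources with a classical minimum-norm (Tikhonov-regularized) estimate, the kind of linear baseline that methods such as ConvDip are compared against. The dimensions, the random leadfield matrix, and the regularization parameter are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 scalp electrodes, 500 candidate dipole sources.
n_channels, n_sources = 32, 500

# Illustrative random leadfield L mapping source activity to scalp potentials;
# in practice L comes from a volume-conductor head model.
L = rng.standard_normal((n_channels, n_sources))

# Simulate a sparse source vector with 3 active dipoles.
x_true = np.zeros(n_sources)
x_true[rng.choice(n_sources, size=3, replace=False)] = 1.0

# Forward model for a single EEG time point: y = L x + noise.
y = L @ x_true + 0.01 * rng.standard_normal(n_channels)

# Minimum-norm estimate: x_hat = L^T (L L^T + lambda I)^{-1} y.
# With far more sources than channels, infinitely many x explain y;
# regularization picks the smallest-norm solution.
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), y)

print(x_hat.shape)
```

Because the system is heavily underdetermined (32 equations, 500 unknowns), the minimum-norm estimate smears activity across many sources; learned approaches aim to produce sparser, better-localized solutions from the same single-time-point input.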
The information available through our senses is noisy, incomplete, and ambiguous. Our perceptual systems have to resolve this ambiguity to construct stable and reliable percepts. Previous EEG studies found large amplitude differences in two event-related potential (ERP) components 200 and 400 ms after stimulus onset when comparing ambiguous with disambiguated visual information ("ERP Ambiguity Effects"). These effects have so far generalized across classical ambiguous figures from different visual categories at lower (geometry, motion) and intermediate (Gestalt perception) levels. The present study aimed to examine whether these ERP Effects are restricted to ambiguous figures or whether they also occur for different degrees of visibility. Smiley faces with low and high visibility of emotional expressions, as well as abstract figures with low and high visibility of a target curvature, were presented. We thus compared ambiguity effects in geometric cube stimuli with visibility effects in emotional faces and in abstract figures. ERP Effects were replicated for the geometric stimuli, and very similar ERP Effects were found for stimuli with emotional face expressions and for abstract figures. In conclusion, the ERP amplitude effects generalize across fundamentally different stimulus categories and are highly similar for different degrees of stimulus ambiguity and stimulus visibility. We postulate the existence of a high-level/meta-perceptual evaluation instance, beyond sensory details, that estimates the certainty of a perceptual decision. The ERP Effects may reflect differences in its evaluation results.