We present a fast and accurate non-invasive brain-machine interface (BMI) based on demodulating steady-state visual evoked potentials (SSVEPs) in electroencephalography (EEG). Our study reports an SSVEP-BMI that, for the first time, decodes primarily on the basis of top-down rather than bottom-up visual information processing. The experimental setup presents a grid-shaped flickering line array that participants observe while intentionally attending to a subset of flickering lines representing the shape of a letter. While the flickering pixels stimulate the participant’s visual cortex uniformly with equal probability, the participant’s intention groups the strokes so that a ‘letter Gestalt’ is perceived. We observed a decoding accuracy of 35.81% (up to 65.83%) with regularized linear discriminant analysis, on average 2.05-fold (and up to 3.77-fold) greater than chance level in multi-class classification. Compared to the EEG signals, the electrooculogram (EOG) did not contribute significantly to decoding accuracy. Further analysis reveals that the top-down SSVEP paradigm shows the most focalised activation pattern around occipital visual areas, and Granger causality analysis consistently revealed prefrontal top-down control over early visual processing. Taken together, the present paradigm provides the first neurophysiological evidence for a top-down SSVEP BMI paradigm, which could enable multi-class intentional control of EEG-BMIs without gaze shifting.
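The multi-class decoding step above can be sketched as a shrinkage-regularized LDA. This is a minimal illustration only: the feature sizes, the shrinkage weight `lam`, the equal-prior assumption, and the function names are assumptions for the sketch, not the study's actual pipeline.

```python
# Minimal sketch of shrinkage-regularized LDA for multi-class decoding.
# All dimensions and the shrinkage weight `lam` are illustrative choices.
import numpy as np

def rlda_fit(X, y, lam=0.1):
    """Fit LDA with the pooled covariance shrunk toward scaled identity:
    Sigma_reg = (1 - lam) * Sigma + lam * (tr(Sigma) / p) * I."""
    classes = np.unique(y)
    p = X.shape[1]
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    centered = np.concatenate(
        [X[y == c] - means[i] for i, c in enumerate(classes)])
    sigma = centered.T @ centered / (len(X) - len(classes))
    sigma = (1 - lam) * sigma + lam * (np.trace(sigma) / p) * np.eye(p)
    inv = np.linalg.inv(sigma)
    W = means @ inv                             # one weight row per class
    b = -0.5 * np.einsum("ij,ij->i", W, means)  # per-class bias terms
    return classes, W, b

def rlda_predict(model, X):
    """Assign each row of X to the class with the highest linear score."""
    classes, W, b = model
    return classes[np.argmax(X @ W.T + b, axis=1)]
```

Shrinking the covariance toward a scaled identity keeps the estimate invertible and well conditioned when trials are scarce relative to the EEG feature dimensionality, which is the usual motivation for regularizing LDA in BMI decoding.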
Although reciprocal inhibitory vestibular interactions following visual stimulation have been understood as a sensory-reweighting mechanism that stabilizes motion perception, this hypothesis has not been thoroughly investigated with temporally resolved measurements. Recently, virtual reality technology has been adopted in several medical domains. However, exposure to virtual reality environments can cause discomfort, including nausea or headache, due to visual-vestibular conflicts. We speculated that self-motion perception could be altered by accelerative visual motion stimulation in virtual reality because of the absence of corresponding vestibular signals (a visual-vestibular sensory conflict), which could result in sickness. The current study investigated the spatio-temporal profiles of motion perception using immersive virtual reality. We demonstrated alterations in neural dynamics under the sensory-mismatch condition (accelerative visual motion stimulation) and in participants with high levels of sickness after the driving simulation. Additionally, an event-related potential analysis revealed that the high-sickness group showed higher P3 amplitudes in the sensory-mismatch condition, suggesting a substantial demand on cognitive resources for motion perception under sensory mismatch.
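The P3 comparison above rests on a standard ERP measurement: average time-locked epochs and take the mean voltage in a late positive window. The sketch below illustrates that step only; the 300-500 ms window, the sampling rate, and the function name are assumptions, not the study's parameters.

```python
# Illustrative sketch of P3 amplitude estimation from EEG epochs.
# Window (300-500 ms) and sampling rate are assumed defaults.
import numpy as np

def p3_amplitude(epochs, srate=250.0, window=(0.3, 0.5)):
    """epochs: (n_trials, n_samples) array time-locked at sample 0.
    Returns the mean amplitude of the trial-averaged ERP in `window` (s)."""
    erp = epochs.mean(axis=0)                  # average over trials -> ERP
    i0, i1 = (int(t * srate) for t in window)  # window bounds in samples
    return erp[i0:i1].mean()
```

Comparing this scalar between conditions (e.g., sensory-mismatch vs. congruent) is how a group difference in P3 amplitude, like the one reported for the high-sickness group, would typically be quantified.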
In this study, we hypothesized that top-down sensory prediction error due to peripheral hearing loss might influence sensorimotor integration, which relies on efference copy (EC) signals as functional connections between auditory and motor brain areas. Using neurophysiological methods, we demonstrated that the auditory responses to self-generated sound were not suppressed in a group of patients with tinnitus accompanied by significant hearing impairment, nor in a schizophrenia group. However, the response was attenuated in a group with tinnitus accompanied by mild hearing impairment, as in a healthy control group. A bias of attentional networks toward self-generated sound was also observed in subjects with tinnitus and significant hearing impairment compared with those with mild hearing impairment and healthy subjects, but it did not reach the marked disintegration found in the schizophrenia group. Although the present study had a significant constraint in that we did not include hearing-loss subjects without tinnitus, these results suggest that auditory deafferentation (hearing loss) may influence the sensorimotor integration process that uses EC signals. However, the impaired sensorimotor integration in subjects with tinnitus and significant hearing impairment may have resulted from aberrant auditory signals due to sensory loss rather than a fundamental deficit in the reafference system, as the auditory attention network for self-generated sound is relatively well preserved in these subjects.
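The suppression effect above is commonly summarized as the fractional attenuation of the auditory response to self-generated versus externally generated sound. The index and the example amplitudes below are a generic illustration of that comparison, not the study's measure or data.

```python
# Illustrative suppression index for efference-copy effects:
# the fractional attenuation of the response to self-generated sound
# relative to external sound. Values near 0 indicate no suppression.
def suppression_index(amp_external, amp_self):
    """(external - self) / external, for positive response amplitudes."""
    return (amp_external - amp_self) / amp_external
```

Under this convention, an attenuated self-generation response (as in the mild-impairment and control groups) yields a positive index, while the absent suppression reported for the significant-impairment and schizophrenia groups yields an index near zero.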