Event-related functional magnetic resonance imaging was used to examine activation in the posterior parietal cortex when subjects made pointing movements or saccades to the same spatial location. One region, well positioned to be homologous to the monkey parietal reach region (PRR), responded preferentially during memory-delay trials in which the subject planned to point to a specific location as compared to trials in which the subject planned to make a saccade to that same location. We therefore conclude that activation in this region is related to specific motor intent; i.e. it encodes information related to the subject's intention to make a specific movement to a particular spatial location.
We used functional magnetic resonance imaging (fMRI) to study readiness and intention signals in frontal and parietal areas that have been implicated in planning saccadic eye movements: the frontal eye fields (FEF) and intraparietal sulcus (IPS). To track fMRI signal changes correlated with readiness to act, we used an event-related design with variable gap periods between disappearance of the fixation point and appearance of the target. To track changes associated with intention, subjects were instructed before the gap period to make either a pro-saccade (look at target) or an anti-saccade (look away from target). FEF activation increased during the gap period and was higher for anti- than for pro-saccade trials. No signal increases were observed during the gap period in the IPS. Our findings suggest that within the frontoparietal networks that control saccade generation, the human FEF, but not the IPS, is critically involved in preparatory set, coding both the readiness and intention to perform a particular movement.
An anti-saccade, which is a saccade directed toward a mirror-symmetrical position in the opposite visual field relative to the visual stimulus, involves at least three separate operations: covert orienting, response suppression, and coordinate transformation. The distinction between pro- and anti-saccades can also be applied to pointing. We used fMRI to compare patterns of brain activation during pro- and anti-movements, to determine whether or not additional areas become active during the production of anti-movements. In parietal cortex, an inferior network was active during both saccades and pointing that included three foci along the intraparietal sulcus: 1) a posterior superior parietal area (pSPR), more active during the anti-tasks; 2) a middle inferior parietal area (mIPR), active only during the anti-tasks; and 3) an anterior inferior parietal area (aIPR), equally active for pro- and anti-movement. A superior parietal network was active during pointing but not saccades and included the following: 1) a medial region, active during anti- but not pro-pointing (mSPR); 2) an anterior and medial region, more active during pro-pointing (aSPR); and 3) an anterior and lateral region, equally active for pro- and anti-pointing (lSPR). In frontal cortex, areas selectively active during anti-movement were adjacent and anterior to areas that were active during both the anti- and pro-tasks, i.e., were anterior to the frontal eye field and the supplementary motor area. All saccade areas were also active during pointing. In contrast, foci in the dorsal premotor area, the anterior superior frontal region, and anterior cingulate were active during pointing but not saccades. In summary, pointing with central gaze activates a frontoparietal network that includes the saccade network. The operations required for the production of anti-movements recruited additional frontoparietal areas.
Our ability to prepare an action in advance allows us to respond to our environment quickly, accurately, and flexibly. Here, we used event-related functional MRI to measure human brain activity while subjects maintained an active state of preparedness. At the beginning of each trial, subjects were instructed to prepare a pro- or antisaccade to a visual cue that was continually present during a long and variable preparation interval, but to defer the saccade's execution until a go signal. The deferred saccade task eliminated the mnemonic component inherent in memory-guided saccade tasks and placed the emphasis entirely on advance motor preparation. During the delay while subjects were in an active state of motor preparedness, the blood oxygen level-dependent signal in the frontal cortex showed 1) a sustained elevation throughout the preparation interval; 2) a linear increase with increasing delay length; 3) a bias for contra- rather than ipsiversive movements; 4) greater activity when the specific metrics of the planned saccade were known compared with when they were not; and 5) increased activity when the saccade was directed toward an internal versus an external representation (i.e., anticue location). These findings support the hypothesis that both the human frontal and parietal cortices are involved in the spatial selection and preparation of saccades.
Variation in response latency to identical sensory stimuli has been attributed to variation in neural activity mediating preparatory set. Here we report evidence for a relationship between saccadic reaction time (SRT) and set-related brain activity measured with event-related functional magnetic resonance imaging. We measured hemodynamic activation time-courses during a preparatory "gap" period, during which no visual stimulus was present and no saccades were made. The subjects merely anticipated appearance of the target. Saccade direction and latency were recorded during scanning, and trials were sorted according to SRT. Both the frontal (FEF) and supplementary eye fields showed pre-target preparatory activity, but only in the FEF was this activity correlated with SRT. Activation in the intraparietal sulcus did not show any preparatory activity. These data provide evidence that the human FEF plays a central role in saccade initiation; pre-target activity in this region predicts both the type of eye movement (whether the subject will look toward or away from the target) and when a future saccade will occur.
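The trial-sorting analysis described above can be illustrated with a minimal sketch on purely simulated data. All numbers, variable names (`fef_gap`, `ips_gap`), and the built-in SRT relationship are hypothetical stand-ins, not values from the study; the sketch only shows the logic of correlating per-trial gap-period activity with saccadic reaction time and comparing fast- versus slow-SRT bins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trial data: saccadic reaction time (ms) and mean
# BOLD signal in a region of interest during the pre-target "gap" period.
n_trials = 60
srt = rng.uniform(150.0, 350.0, n_trials)

# Simulated FEF-like region: higher preparatory activity -> faster saccades
# (negative relationship built in by construction).
fef_gap = 1.0 - 0.002 * srt + 0.05 * rng.standard_normal(n_trials)
# Simulated IPS-like region: no relationship with SRT.
ips_gap = 0.5 + 0.05 * rng.standard_normal(n_trials)

def srt_correlation(activity, srt):
    """Pearson correlation between gap-period activity and reaction time."""
    return np.corrcoef(activity, srt)[0, 1]

# Sort trials into fast vs. slow SRT bins with a median split.
fast = srt < np.median(srt)
fef_fast, fef_slow = fef_gap[fast].mean(), fef_gap[~fast].mean()
```

On these simulated data, `srt_correlation(fef_gap, srt)` is strongly negative (preparatory activity predicts faster saccades) while the IPS-like signal shows no systematic relationship, mirroring the dissociation the abstract reports.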
Although it is obvious that vision plays a primary role in reaching and grasping objects, the sources of the visual information used in programming and controlling various aspects of these movements are still being investigated. One source of visual information is feedback relating to the characteristics of the reach itself: for example, the speed and trajectory of the moving limb and the change in the posture of the hand and fingers. The present study selectively eliminated this source of visual information by blocking the subject's view of the reaching limb with an opaque barrier while still enabling subjects to view the goal object. Thus, a direct comparison was made between standard (closed-loop) and object-only (open-loop) visual-feedback conditions in a situation in which the light levels and contrast between an object and its surroundings were equivalent in both viewing conditions. Reach duration was longer, with proportionate increases in both the acceleration and deceleration phases, when visual feedback of the reaching limb was prevented. Maximum grip aperture and the proportion of movement time at which it occurred were the same in both conditions. Thus, in contrast to previous studies that did not employ constant light levels across closed- and open-loop reaching conditions, a dissociation was found between the spatial and temporal dimensions of grip formation. It appears that the posture of the hand can be programmed without visual feedback of the hand, presumably via a combination of visual information about the goal object and proprioceptive feedback (and/or efference copy). Nevertheless, maximum grip aperture (like the kinematic markers examined in the transport component) was also delayed when visual feedback of the reaching limb was selectively prevented. In other words, the relative timing of kinematic events was essentially unchanged, reflecting perhaps a tight coupling between the transport and grip components.
Despite significant recent progress in the area of Brain-Computer Interface (BCI), there are numerous shortcomings associated with collecting Electroencephalography (EEG) signals in real-world environments. These include, but are not limited to, subject and session data variance, long and arduous calibration processes and predictive generalisation issues across different subjects or sessions. This implies that many downstream applications, including Steady State Visual Evoked Potential (SSVEP) based classification systems, can suffer from a shortage of reliable data. Generating meaningful and realistic synthetic data can therefore be of significant value in circumventing this problem. We explore the use of modern neural-based generative models trained on a limited quantity of EEG data collected from different subjects to generate supplementary synthetic EEG signal vectors, subsequently utilised to train an SSVEP classifier. Extensive experimental analysis demonstrates the efficacy of our generated data, leading to improvements across a variety of evaluations, with the crucial task of cross-subject generalisation improving by over 35% with the use of such synthetic data.
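The augmentation pipeline described above can be sketched in miniature. Everything here is a simplified stand-in: the "generative model" is just noisy resampling of real trials (not a trained neural generator), the SSVEP signals are simulated sinusoids at assumed flicker frequencies, and the classifier is a nearest-centroid rule on FFT features. The sketch only illustrates the workflow of augmenting a small calibration set with synthetic trials before training a classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

FS = 256              # assumed sampling rate (Hz)
N_SAMPLES = 512       # 2-second trials
FREQS = [8.0, 12.0]   # hypothetical SSVEP stimulation frequencies

def make_trial(freq):
    """Simulate one SSVEP trial: a sinusoid at the flicker frequency plus noise."""
    t = np.arange(N_SAMPLES) / FS
    return np.sin(2 * np.pi * freq * t) + 0.8 * rng.standard_normal(N_SAMPLES)

def generate_synthetic(real_trials, n_new):
    """Stand-in for a trained neural generative model: perturb randomly
    chosen real trials with additional noise to create new samples."""
    idx = rng.integers(0, len(real_trials), size=n_new)
    return real_trials[idx] + 0.3 * rng.standard_normal((n_new, N_SAMPLES))

def psd_feature(trials):
    """Spectral amplitude at each candidate frequency (simple FFT features)."""
    spec = np.abs(np.fft.rfft(trials, axis=-1))
    bins = [int(round(f * N_SAMPLES / FS)) for f in FREQS]
    return spec[:, bins]

# A small "real" calibration set per class, as from a short recording session.
real = {f: np.stack([make_trial(f) for _ in range(10)]) for f in FREQS}

# Augment each class with synthetic trials, then fit a nearest-centroid classifier.
centroids = {}
for f, trials in real.items():
    augmented = np.vstack([trials, generate_synthetic(trials, 30)])
    centroids[f] = psd_feature(augmented).mean(axis=0)

def classify(trial):
    feats = psd_feature(trial[None, :])[0]
    return min(FREQS, key=lambda f: np.linalg.norm(feats - centroids[f]))
```

In a real system, `generate_synthetic` would be replaced by sampling from a generative model (e.g., a GAN or VAE) trained on EEG from other subjects or sessions, which is where the cross-subject generalisation gains reported above would come from.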