We examined the responses of neurons in posterior parietal area 7a to passive rotational and translational self-motion stimuli, while systematically varying the speed of visually simulated (optic flow cues) or actual (vestibular cues) self-motion. Contrary to a general belief that responses in area 7a are predominantly visual, we found evidence for a vestibular dominance in self-motion processing. Only a small fraction of neurons showed multisensory convergence of visual/vestibular and linear/angular self-motion cues. These findings suggest possibly independent neuronal population codes for visual versus vestibular and linear versus angular self-motion. Neural responses scaled with self-motion magnitude (i.e., speed), but temporal dynamics were diverse across the population. Analyses of laminar recordings showed a strong distance-dependent decrease in correlations of both the stimulus-induced (signal correlation) and stimulus-independent (noise correlation) components of spike-count variability, supporting the notion that neurons are spatially clustered with respect to their sensory representation of motion. Single-unit and multiunit response patterns were also correlated, but no other systematic dependencies on cortical layers or columns were observed. These findings describe a likely independent multimodal neural code for linear and angular self-motion in a posterior parietal area of the macaque brain that is connected to the hippocampal formation.
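The signal- and noise-correlation analysis mentioned above can be sketched in a few lines. This is a minimal illustration on simulated spike counts using the standard definitions (signal correlation between trial-averaged tuning curves; noise correlation between per-stimulus residuals); the data, neuron counts, and noise model are hypothetical, not the study's actual recordings or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike counts: n_neurons x n_stimuli x n_repeats (hypothetical data).
n_neurons, n_stimuli, n_repeats = 4, 8, 50
tuning = rng.uniform(5, 20, size=(n_neurons, n_stimuli))      # mean rate per stimulus
shared = rng.normal(0, 1, size=(n_stimuli, n_repeats))        # trial-to-trial co-fluctuation
counts = rng.poisson(tuning[:, :, None], size=(n_neurons, n_stimuli, n_repeats)) \
         + 2.0 * shared[None, :, :]                           # same noise added to all neurons

def signal_correlation(counts, i, j):
    """Correlation of the two neurons' trial-averaged tuning curves."""
    return np.corrcoef(counts[i].mean(axis=1), counts[j].mean(axis=1))[0, 1]

def noise_correlation(counts, i, j):
    """Correlation of stimulus-independent residuals (counts minus the
    per-stimulus mean), pooled across stimuli."""
    resid = counts - counts.mean(axis=2, keepdims=True)
    return np.corrcoef(resid[i].ravel(), resid[j].ravel())[0, 1]

r_sig = signal_correlation(counts, 0, 1)
r_noise = noise_correlation(counts, 0, 1)
```

Because the simulated noise source is shared across neurons, the noise correlation here is positive by construction; in the laminar data, the interesting quantity is how both correlations fall off with inter-contact distance.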
Saccade adaptation is a cerebellar-mediated type of motor learning in which the oculomotor system is exposed to repetitive errors. Different types of saccade adaptation are thought to involve distinct underlying cerebellar mechanisms. Transcranial direct current stimulation (tDCS) induces changes in neuronal excitability in a polarity-specific manner and offers a modulatory, noninvasive, functional insight into the learning aspects of different brain regions. We aimed to modulate the cerebellar influence on saccade gains during adaptation using tDCS. Subjects performed an inward (n = 10) or outward (n = 10) saccade adaptation experiment (25% intrasaccadic target step) while receiving 1.5 mA of anodal cerebellar tDCS delivered by a small contact electrode. Compared to sham stimulation, tDCS increased learning of saccadic inward adaptation but did not affect learning of outward adaptation. This may imply that plasticity mechanisms in the cerebellum differ between inward and outward adaptation: tDCS may have influenced specific cerebellar areas that contribute to inward but not outward adaptation. We conclude that tDCS can be used as a neuromodulatory technique to alter cerebellar oculomotor output, arguably by engaging wider cerebellar areas and increasing the available resources for learning.
To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behaviour. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic, visuomotor navigation task. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal-tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. By using passive stimulus playback and manipulating stimulus reliability, we show that subjects' eye movements are likely voluntary, rather than reflexive. These results suggest that gaze dynamics play a key role in action-selection during challenging visuomotor behaviours, and may possibly serve as a window into the subject's dynamically evolving internal beliefs.

…the range of target distances and the duration for which the target was visible (see Methods). All subjects were head-fixed, and we recorded each subject's movement trajectory (Fig 1D, middle) as well as eye position (Fig 1D, right) throughout each trial.

Behavioural performance

Figure 1E shows the performance of the monkeys in this task. Both the radial distance (Fig 1E, left) and angular eccentricity (Fig 1E, right) of the monkeys' responses (stopping location) were highly correlated with the target location across trials (n = 3 monkeys, Pearson's r ± standard deviation, radial distance: 0.72 ± 0.1, angle: 0.84 ± 0.1), suggesting that their behaviour was appropriate for the task. To test whether their performance was accurate, we regressed their responses against target locations.
The slope of the regression was close to unity both for radial distance (mean ± standard deviation = 0.92 ± 0.06) and angle (0.98 ± 0.1), suggesting that the monkeys were nearly unbiased (Fig 1F, green). We did notice modest undershooting for distant targets, an effect that is likely due to growing position uncertainty described in previous work (Lakshminarasimhan et al., 2018). We showed previously that humans are systematically biased when performing this task without feedback, and that the bias was likely due to prior expectations that make them underestimate their movement velocities (Lakshminarasimhan et al., 2018). Consistent with those findings, human subjects overshot the target in an initial block of trials in which no feedback was provided (Fig S1C; n = 5, mean slope ± standard deviation, radial distance: 1.21 ± 0.2, angle: 1.78 ± 0.3), to a degree that was proportional to target distance. With feedback, however, the same subjects quickly adapted their responses to produce nearly unbiased performance (Fig 1F, purple; see Fig S1D for individual trials; mean slope ± standard deviation, radial distance: 0.95 ± 0.1, angle:...
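The correlation-and-regression check described above can be sketched as follows. This is a generic illustration on simulated trials: the target range, noise level, and the through-the-origin regression convention are all assumptions for the sketch, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trials: target radial distances and a subject's stopping distances.
targets = rng.uniform(1.0, 4.0, size=200)                  # e.g. metres
responses = 0.95 * targets + rng.normal(0, 0.2, size=200)  # nearly unbiased, noisy

# Pearson correlation between response and target across trials.
r = np.corrcoef(targets, responses)[0, 1]

# Regression slope; least squares through the origin is one common
# convention for a bias estimate (slope = 1 means unbiased,
# slope < 1 means undershooting, slope > 1 means overshooting).
slope = np.dot(targets, responses) / np.dot(targets, targets)
```

With the simulated 0.95 gain, the fitted slope lands just below unity, mirroring the "nearly unbiased with modest undershoot" pattern reported for the monkeys.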
Sensory evidence accumulation is considered a hallmark of decision-making in noisy environments. Integration of sensory inputs has been traditionally studied using passive stimuli, segregating perception from action. Lessons learned from this approach, however, may not generalize to ethological behaviors like navigation, where there is an active interplay between perception and action. We designed a sensory-based sequential decision task in virtual reality in which humans and monkeys navigated to a memorized location by integrating optic flow generated by their own joystick movements. A major challenge in such closed-loop tasks is that subjects' actions will determine future sensory input, causing ambiguity about whether they rely on sensory input rather than expectations based solely on a learned model of the dynamics. To test whether subjects integrated optic flow over time, we used three independent experimental manipulations: unpredictable optic flow perturbations, which pushed subjects off their trajectory; gain manipulation of the joystick controller, which changed the consequences of actions; and manipulation of the optic flow density, which changed the information borne by sensory evidence. Our results suggest that both macaques (male) and humans (female/male) relied heavily on optic flow, thereby demonstrating a critical role for sensory evidence accumulation during naturalistic action-perception closed-loop tasks.

SIGNIFICANCE STATEMENT The temporal integration of evidence is a fundamental component of mammalian intelligence. Yet, it has traditionally been studied using experimental paradigms that fail to capture the closed-loop interaction between actions and sensations inherent in real-world continuous behaviors. These conventional paradigms use binary decision tasks and passive stimuli with statistics that remain stationary over time.
Instead, we developed a naturalistic visuomotor navigation paradigm that mimics the causal structure of real-world sensorimotor interactions and probed the extent to which participants integrate sensory evidence by adding task manipulations that reveal complementary aspects of the computation.
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the traditionally considered sensory area (i.e., MSTd) tracked latent variables, demonstrating path integration and vector-coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than one linking either of these areas with 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grain functional subnetworks may be dynamically established to subserve (embodied) task strategies.
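The two latent variables named above (path-integrated self-position and the vector to the hidden goal) can be sketched for a simulated trajectory. The kinematics, units, and goal coordinates below are hypothetical choices for illustration, not the task's actual parameters.

```python
import numpy as np

# Simulated 2-s trajectory: constant forward speed, a brief turn, then straight.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
v = np.full_like(t, 1.5)            # linear velocity (m/s, assumed)
w = np.where(t < 1.0, 0.3, 0.0)     # angular velocity (rad/s, assumed)

# (i) Path integration: accumulate angular then linear velocity.
heading = np.cumsum(w) * dt
x = np.cumsum(v * np.cos(heading)) * dt
y = np.cumsum(v * np.sin(heading)) * dt

# (ii) Vector to the memorized (invisible) goal at each time step.
goal = np.array([2.5, 0.5])                     # hypothetical "firefly" location
dx, dy = goal[0] - x, goal[1] - y
dist_to_goal = np.hypot(dx, dy)                 # distance to hidden goal
angle_to_goal = np.arctan2(dy, dx) - heading    # egocentric angle to hidden goal
```

In the study these quantities are latent (nothing on the screen marks the goal), which is what makes neural tracking of `dist_to_goal` and `angle_to_goal` evidence of path integration and vector-coding rather than of a visual response.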