Summary
Spatial learning requires estimates of location that may be obtained by path integration or from positional cues. Grid and other spatial firing patterns of neurons in the superficial medial entorhinal cortex (MEC) suggest roles in behavioral estimation of location. However, distinguishing the contributions of path integration and cue-based signals to spatial behaviors is challenging, and the roles of identified MEC neurons are unclear. We use virtual reality to dissociate linear path integration from other strategies for behavioral estimation of location. We find that mice learn to path integrate using motor-related self-motion signals, with accuracy that decreases steeply as a function of distance. We show that inactivation of stellate cells in superficial MEC impairs spatial learning in virtual reality and in a real-world object location recognition task. Our results quantify contributions of path integration to behavior and corroborate key predictions of models in which stellate cells contribute to location estimation.
The process by which visual information is incorporated into the brain's spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. Presenting the same environment but decoupled from mouse behavior degraded encoding fidelity. Analyzing visual and motor responses showed that landmark codes were the result of supralinear integration. Surprisingly, V1 axons recorded in RSC showed similar receptive fields. However, they were less modulated by task engagement, indicating that landmark representations in RSC are the result of local computations. Our data provide cellular- and network-level insight into how RSC represents landmarks.
Summary
The integration of visual stimuli and motor feedback is critical for successful visually guided navigation. These signals have been shown to shape neuronal activity in the primary visual cortex (V1) in an experience-dependent manner. Here, we examined whether visual, reward, and self-motion-related inputs are integrated to encode behaviorally relevant locations in V1 neurons. Using a behavioral task in a virtual environment, we monitored layer 2/3 neuronal activity as mice learned to locate a reward along a linear corridor. With learning, a subset of neurons became responsive to the expected reward location. Without a visual cue to the reward location, both behavioral and neuronal responses relied on self-motion-derived estimations. However, when visual cues were available, both neuronal and behavioral responses were driven by visual information. Therefore, a population of V1 neurons encodes behaviorally relevant spatial locations, based either on visual cues or on self-motion feedback when visual cues are absent.
The process by which visual information is incorporated into the brain’s spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. This encoding was more robust after task acquisition. Decoupling the virtual environment from mouse behavior degraded spatial representations and provided evidence that supralinear integration of visual and motor inputs contributes to landmark encoding. V1 axons recorded in RSC were less modulated by task engagement but showed surprisingly similar spatial tuning. Our data indicate that landmark representations in RSC are the result of local integration of visual, motor, and spatial information.