Summary
Research in neuroscience increasingly relies on the mouse, a mammalian species that affords unparalleled genetic tractability and comprehensive brain atlases. Here, we introduce high-yield methods for probing mouse visual decisions. Mice are head-fixed, facilitating repeatable visual stimulation, eye tracking, and brain access. They turn a steering wheel to make two-alternative choices, forced or unforced. Learning is rapid thanks to an intuitive coupling of stimuli to wheel position. The mice's decisions yield high-quality psychometric curves for detection and discrimination and conform to the predictions of a simple probabilistic observer model. The task is readily paired with two-photon imaging of cortical activity. Optogenetic inactivation reveals that the task requires mice to use their visual cortex. Mice are motivated to perform the task by fluid reward or by optogenetic stimulation of dopamine neurons; this stimulation elicits more trials and faster learning. These methods provide a platform for accurately probing mouse vision and its neural basis.
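A simple probabilistic observer for a two-alternative task like the one summarized above can be written down compactly. The sketch below is illustrative only: the function name and all parameter values are assumptions for demonstration, not taken from the paper. The observer compares left and right contrasts through a noisy decision variable and occasionally lapses into random guessing.

```python
import math

def p_choose_right(c_left, c_right, sigma=0.1, lapse=0.05):
    """Probability that the model observer reports the right-side stimulus.

    Decision variable: the left-right contrast difference corrupted by
    Gaussian noise of standard deviation `sigma`; on a fraction `lapse`
    of trials the observer guesses at random. Parameter values here are
    illustrative, not fitted to data.
    """
    # Cumulative Gaussian of the signed contrast difference
    phi = 0.5 * (1.0 + math.erf((c_right - c_left) / (sigma * math.sqrt(2.0))))
    # Lapses mix in stimulus-independent 50/50 guessing
    return lapse * 0.5 + (1.0 - lapse) * phi
```

With equal contrasts the model chooses each side with probability 0.5, and by symmetry `p_choose_right(a, b) + p_choose_right(b, a) = 1`, which is the basic shape a two-alternative psychometric curve must have.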
Progress in science requires standardized assays whose results can be readily shared, compared, and reproduced across laboratories. Reproducibility, however, has been a concern in neuroscience, particularly for measurements of mouse behavior. Here, we show that a standardized task to probe decision-making in mice produces reproducible results across multiple laboratories. We adopted a task for head-fixed mice that assays perceptual and value-based decision-making, and we standardized the training protocol and the experimental hardware, software, and procedures. We trained 140 mice across seven laboratories in three countries, and we collected 5 million mouse choices into a publicly available database. Learning speed varied across mice and laboratories, but once training was complete there were no significant differences in behavior across laboratories. Mice in different laboratories adopted a similar reliance on visual stimuli, on past successes and failures, and on estimates of stimulus prior probability to guide their choices. These results reveal that a complex mouse behavior can be reproduced across multiple laboratories. They establish a standard for reproducible rodent behavior, and they provide an unprecedented dataset and open-access tools to study decision-making in mice. More generally, they indicate a path toward achieving reproducibility in neuroscience through collaborative open-science approaches.
In the primate retina, "red-green" color coding is initiated when signals originating in long (L)- and middle (M)-wavelength-sensitive cone photoreceptors interact antagonistically. The center-surround receptive field of "midget" ganglion cells provides the neural substrate for L- versus M-cone-opponent interaction, but the underlying circuitry remains unsettled, centering on the longstanding question of whether specialized cone wiring is present. To address this question, we measured the strength, sign, and spatial tuning of L- and M-cone input to midget receptive fields in the peripheral retina of macaque monkeys of either sex. Consistent with previous work, cone opponency arose when one of the cone types showed a stronger connection to the receptive-field center than to the surround. We implemented a difference-of-Gaussians spatial receptive field model, incorporating the known biology of the midget circuit, to test whether the physiological responses we observed in real cells could be captured entirely by anatomical nonselectivity. When this model sampled nonselectively from a realistic cone mosaic, it accurately reproduced key features of cone-opponent receptive field structure and predicted both the variability and the strength of cone opponency across the retina. The model introduced here is consistent with abundant anatomical evidence for nonselective wiring, explains both local and global properties of the midget population, and supports their role in multiplexing spatial and color information. It provides a neural basis for human chromatic sensitivity across the visual field, as well as for the maintenance of normal color vision despite significant variability in the relative numbers of L and M cones across individuals.

Red-green color vision is a hallmark of human and nonhuman primates that starts in the retina with the presence of long (L)- and middle (M)-wavelength-sensitive cone photoreceptor types. Understanding the retinal mechanism underlying color opponency has focused on the broad question of whether this characteristic can emerge from nonselective wiring, or whether complex cone-type-specific wiring must be invoked. We provide experimental and modeling support for the hypothesis that nonselective connectivity is sufficient to produce the range of red-green color opponency observed in midget ganglion cells across the retina. Our nonselective model reproduces the diversity of physiological responses of midget cells while also accounting for systematic changes in color sensitivity across the visual field.
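The core of the difference-of-Gaussians idea can be illustrated in a few lines. The sketch below is a minimal toy version, not the paper's model: function names, Gaussian widths, and surround strength are all illustrative assumptions. It shows how a receptive field that weights cones purely by distance (an excitatory center minus a broader, weaker surround), with no regard to cone type, can still yield opposite-signed net L and M input when one type happens to sit nearer the center.

```python
import math

def dog_weight(d, sigma_c=1.0, sigma_s=3.0, k_s=0.3):
    """Difference-of-Gaussians weight at distance d from the receptive
    field center: a narrow excitatory center minus a broader, weaker
    inhibitory surround. Widths and surround gain are illustrative."""
    center = math.exp(-d ** 2 / (2.0 * sigma_c ** 2))
    surround = k_s * math.exp(-d ** 2 / (2.0 * sigma_s ** 2))
    return center - surround

def net_cone_input(cone_positions, cone_types):
    """Sum DoG weights separately for L and M cones, sampling every cone
    nonselectively; opponency (opposite-signed totals) emerges whenever
    one cone type dominates the center region by chance."""
    totals = {"L": 0.0, "M": 0.0}
    for (x, y), t in zip(cone_positions, cone_types):
        totals[t] += dog_weight(math.hypot(x, y))
    return totals
```

For example, an L cone at the center and an M cone two center-widths away give a positive L total and a negative M total, i.e. a cone-opponent response from purely distance-based wiring.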
Progress in neuroscience is hindered by poor reproducibility of mouse behavior. Here we show that in a visual decision-making task, reproducibility can be achieved by automating the training protocol and by standardizing experimental hardware, software, and procedures. We trained 101 mice in this task across seven laboratories at six research institutions in three countries, obtaining 3 million mouse choices. In trained mice, variability in behavior between labs was indistinguishable from variability within labs. Psychometric curves showed no significant differences in visual threshold, bias, or lapse rates across labs. Moreover, mice across laboratories adopted similar strategies when stimulus location had an asymmetrical probability that changed over time. We provide detailed instructions and open-source tools to set up and implement our method in other laboratories. These results establish a new standard for reproducibility of rodent behavior and provide accessible tools for the study of decision making in mice.
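The three quantities compared across labs above (visual threshold, bias, and lapse rates) are the parameters of a standard psychometric function. A minimal sketch of such a function for signed-contrast stimuli is shown below; the parameter values are placeholders for illustration, not fitted values from any lab.

```python
import math

def psychometric(contrast, bias=0.0, threshold=0.15,
                 lapse_left=0.02, lapse_right=0.02):
    """Probability of a rightward choice versus signed contrast
    (negative = stimulus on the left, positive = on the right).

    `bias` shifts the curve horizontally, `threshold` sets its slope,
    and the lapse rates set the asymptotes; all defaults are
    illustrative placeholders.
    """
    # Erf-shaped sigmoid centered on the bias, scaled by the threshold
    phi = 0.5 * (1.0 + math.erf((contrast - bias)
                                / (threshold * math.sqrt(2.0))))
    # Lapses compress the curve away from 0 and 1
    return lapse_left + (1.0 - lapse_left - lapse_right) * phi
```

Comparing labs then amounts to fitting these four parameters per mouse (e.g. by maximum likelihood over binomial choice counts) and testing for between-lab differences in the fitted values.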