Despite strong evidence to the contrary in the literature, microsaccades are overwhelmingly described as involuntary eye movements. Here we show in both human subjects and monkeys that individual microsaccades of any direction can easily be triggered: (1) on demand, based on an arbitrary instruction, (2) without any special training, (3) without visual guidance by a stimulus, and (4) in a spatially and temporally accurate manner. Subjects readily generated instructed “memory-guided” microsaccades, and did so much as they made normal visually-guided ones. In two monkeys, we also observed midbrain superior colliculus neurons that exhibited movement-related activity bursts for memory-guided microsaccades but not for similarly sized visually-guided movements. Our results demonstrate behavioral and neural evidence for voluntary control over individual microsaccades, supporting recently discovered functional contributions of individual microsaccade generation to visual performance alterations and covert visual selection, as well as observations that microsaccades optimize eye position during high-acuity visually-guided behavior.
Graphical Abstract Highlights
- We simulate ripples with two-compartment models in a CA3-CA1 hippocampal network
- Simulated ripples emerge in CA1 due to excitation paced by recurrent inhibition
- They exhibit slow-gamma band CA3-CA1 coordination relayed by CA1 feedback inhibition
- CA1 feedback inhibition is also key to controlling cell participation and sequence replay

In Brief
The hippocampus replays mnemonic representations during so-called ripple oscillations. Ramirez-Villegas et al. show in a biophysically realistic model how the content and temporal organization of these representations are coordinated by recurrent interactions between pyramidal and inhibitory neurons, as well as by gamma oscillations.

SUMMARY
Hippocampal ripple oscillations likely support the reactivation of memory traces, which manifests as temporally organized spiking of sparse neuronal ensembles. However, the network mechanisms that achieve this function are largely unknown. We designed a multi-compartmental model of the CA3-CA1 subfields to generate biophysically realistic ripple dynamics from the cellular level to local field potentials. Simulations broadly parallel in vivo observations and support the view that ripples emerge from CA1 pyramidal spiking paced by recurrent inhibition. Beyond the ripple oscillations themselves, the key coordination mechanisms involve concomitant aspects of network activity. Recurrent synaptic interactions in CA1 exhibit slow-gamma band coherence with CA3 input, offering a way to coordinate CA1 activity with its CA3 inducers. Moreover, CA1 feedback inhibition controls the content of spontaneous replay during CA1 ripples, forming new mnemonic representations through plasticity. These insights are consistent with the slow-gamma interactions and interneuronal circuit plasticity observed in vivo, suggesting a multifaceted ripple-related replay phenomenon.

EXPERIMENTAL MODEL AND SUBJECT DETAILS
Four male rhesus monkeys (Macaca mulatta), aged 5-9 years, were used in this study. MRI-compatible head holders and chambers were made of PEEK (polyether ether ketone; TecaPEEK, Ensinger, Nufringen, Germany) and implanted stereotaxically on the cranium of the four monkeys using standard clinical aseptic techniques. Implants were secured with custom-made ceramic screws (zirconium oxide; Pfannenstiel, Germany). Postoperatively, animals were placed in large, specially designed recovery chairs for 3 days, during which they were taken for walks by the animal caretakers 2 to 3 times per day. The chairs allowed the animals to move their bodies and hands freely but prevented them from touching the implants. As a prophylactic measure, antibiotics (enrofloxacin; Baytril) and analgesics (flunixin; Finadyne) were administered for 5 days. All surgical procedures were carried out under general balanced anesthesia, whose induction and maintenance were performed by trained and qualified personnel. Detailed descriptions of our procedures can also be found on the website of our institute (http://www.hirnforschung.kyb.mpg.de/en/homepage.html). All experimental and surgical...
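The highlights above rest on two-compartment neuron models. As a rough illustration of what such a reduction involves, here is a minimal Python sketch of a somatic compartment coupled to a passive dendritic compartment, with threshold-and-reset spiking; the simplified dynamics and every parameter value are illustrative assumptions, not the paper's actual biophysical model.

```python
def simulate_two_compartment(T=500.0, dt=0.1, drive=80.0):
    """Soma coupled to a passive dendrite; threshold-and-reset spiking.
    Times in ms, voltages in mV; every constant here is an illustrative guess."""
    E_L, tau = -65.0, 10.0        # leak reversal (mV), membrane time constant (ms)
    g_c = 0.5                     # dimensionless soma-dendrite coupling strength
    v_th, v_reset = -50.0, -65.0  # somatic spike threshold and reset (mV)
    v_s, v_d = E_L, E_L
    spike_times = []
    for step in range(int(T / dt)):
        # Dendrite receives the external drive and couples to the soma.
        dv_d = (-(v_d - E_L) + g_c * (v_s - v_d) + drive) / tau
        # Soma is driven only through the coupling current from the dendrite.
        dv_s = (-(v_s - E_L) + g_c * (v_d - v_s)) / tau
        v_d += dt * dv_d
        v_s += dt * dv_s
        if v_s >= v_th:           # somatic spike: record time and reset
            spike_times.append(step * dt)
            v_s = v_reset
    return spike_times

print(len(simulate_two_compartment()), "somatic spikes in 500 ms")
```

Splitting the cell this way lets dendritic input build up depolarization that drives somatic spiking only through the coupling term, which is the basic property the full multi-compartmental network model exploits.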
Deep neural networks (DNNs) have set new standards at predicting responses of neural populations to visual input. Most such DNNs consist of a convolutional network (core), shared across all neurons, which learns a representation of neural computation in visual cortex, and a neuron-specific readout that linearly combines the relevant features of this representation. The goal of this paper is to test whether such a representation is indeed generally characteristic of visual cortex, i.e., whether it generalizes between animals of a species, and what factors contribute to obtaining such a generalizing core. To push all non-linear computations into the core, where the generalizing cortical features should be learned, we devise a novel readout that reduces the number of per-neuron readout parameters by up to two orders of magnitude compared to the previous state of the art. It does so by taking advantage of retinotopy and learning a Gaussian distribution over each neuron's receptive field position. With this new readout we train our network on neural responses from mouse primary visual cortex (V1) and obtain a 7% gain in performance compared to the previous state-of-the-art network. We then investigate whether the convolutional core indeed captures general cortical features by using it for transfer learning to a different animal. When transferring a core trained on thousands of neurons from various animals and scans, we exceed the performance of training directly on that animal by 12% and outperform a commonly used VGG16 core pre-trained on ImageNet by 33%. In addition, transfer learning with our data-driven core is more data-efficient than direct training, achieving the same performance with only 40% of the data. Our model with its novel readout thus sets a new state of the art for neural response prediction in mouse visual cortex from natural images, generalizes between animals, and captures characteristic cortical features better than current task-driven pre-training approaches such as VGG16.
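The abstract describes the readout mechanism concretely: exploit retinotopy by learning, per neuron, a Gaussian distribution over receptive-field position, then linearly combine the core's features at a sampled location. Below is a minimal PyTorch sketch of that idea; the class name, the axis-aligned sampling, and the single-scale grid_sample implementation are my assumptions rather than the paper's published code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianReadout(nn.Module):
    """Per-neuron Gaussian over feature-map position plus a linear channel readout.
    A minimal sketch of the idea, not the authors' implementation."""
    def __init__(self, n_neurons, n_channels):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_neurons, 2))         # RF center in [-1, 1]^2
        self.log_sigma = nn.Parameter(torch.zeros(n_neurons, 2))  # positional spread
        self.weights = nn.Parameter(torch.randn(n_neurons, n_channels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, features):                  # features: (B, C, H, W) from the core
        b = features.shape[0]
        if self.training:                         # sample a position; use mu at eval time
            eps = torch.randn_like(self.mu)
            loc = self.mu + eps * self.log_sigma.exp()
        else:
            loc = self.mu
        grid = loc.clamp(-1, 1).view(1, -1, 1, 2).expand(b, -1, -1, -1)  # (B, N, 1, 2)
        sampled = F.grid_sample(features, grid, align_corners=True)      # (B, C, N, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)                   # (B, N, C)
        return (sampled * self.weights).sum(-1) + self.bias              # (B, N)

core_out = torch.randn(8, 64, 18, 32)             # toy stand-in for core feature maps
readout = GaussianReadout(n_neurons=100, n_channels=64)
print(readout(core_out).shape)                    # torch.Size([8, 100])
```

The parameter saving is visible directly: each neuron needs only a 2D position, a spread, and one weight per channel, instead of a full spatial weight mask over the feature map.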
Responses to natural stimuli in area V4, a mid-level area of the visual ventral stream, are well predicted by features from convolutional neural networks (CNNs) trained on image classification. This result has been taken as evidence for a functional role of V4 in object classification. However, we currently do not know if, and to what extent, V4 plays a role in solving other computational objectives. Here, we investigated normative accounts of V4 by predicting macaque single-neuron responses to natural images from the representations extracted by 23 CNNs trained on different computer vision tasks, including semantic, geometric, 2D, and 3D visual tasks. We found that semantic classification tasks do indeed provide the best predictive features for V4. Other tasks (3D tasks in particular) followed very closely in performance, but a similar pattern of task performance emerged when predicting the activations of a network trained exclusively on object recognition. Thus, our results support V4's main functional role in semantic processing. At the same time, they suggest that V4's affinity to the various 3D and 2D stimulus features reported by electrophysiologists could be a corollary of a semantic functional goal.
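The encoding approach implied here, predicting single-neuron responses from the frozen features of task-pretrained CNNs, typically reduces to fitting a regularized linear readout per neuron. A minimal sketch under stated assumptions: VGG16 as the feature extractor, spatial average pooling, and ridge regression with a cross-validated alpha grid are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV

# Truncate an ImageNet-pretrained VGG16 at a mid-level layer and freeze it.
cnn = models.vgg16(weights="IMAGENET1K_V1").features[:17].eval()

images = torch.randn(200, 3, 224, 224)    # toy stand-in for the natural-image set
responses = np.random.rand(200, 50)       # toy stand-in for 50 V4 neurons' responses

with torch.no_grad():
    # Spatially pool the feature maps to get one vector per image: (n_images, C).
    feats = cnn(images).mean(dim=(2, 3)).numpy()

# One regularized linear readout fit jointly for all neurons.
readout = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(feats, responses)
print("in-sample R^2:", readout.score(feats, responses))
```

Repeating this fit for each of the 23 task-trained networks, and comparing held-out prediction accuracy, is the basic comparison the abstract describes.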
Across animal species, sensory processing dynamically adapts to behavioral context. In the mammalian visual system, sensory neural responses and behavioral performance increase during an active behavioral state characterized by locomotor activity and pupil dilation, whereas the preferred stimuli of individual neurons typically remain unchanged. Here, we address how behavioral states modulate stimulus selectivity in the context of colored natural scenes using a combination of large-scale population imaging, behavior, pharmacology, and deep neural network modeling. In the visual cortex of awake mice, we identified a consistent shift of individual neurons' color preferences towards ultraviolet stimuli during active behavioral periods that was particularly pronounced in the upper visual field. We found that this spectral shift in neural tuning is mediated by pupil dilation, resulting in a dynamic switch from rod- to cone-driven visual responses at constant ambient light levels. We further showed that this shift selectively enhances the discriminability of ultraviolet objects and facilitates the detection of ethological stimuli, such as aerial predators against the ultraviolet background of the twilight sky. Our results suggest a novel functional role for pupil dilation during active behavioral states as a bottom-up mechanism that, together with top-down neuromodulatory mechanisms, dynamically tunes visual representations to different behavioral demands.
The foveal visual image region provides the human visual system with the highest acuity. However, it is unclear whether such a high-fidelity representational advantage is maintained when foveal image locations are committed to short-term memory. Here, we describe a paradoxically large distortion in foveal target location recall by humans. We briefly presented small, but high-contrast, points of light at eccentricities ranging from 0.1° to 12° while subjects maintained their line of sight on a stable target. After a brief memory period, the subjects indicated the remembered target locations via computer-controlled cursors. The largest localization errors, in terms of both directional deviations and amplitude percentage overshoots or undershoots, occurred for the most foveal targets, and these distortions were still present, albeit with qualitatively different patterns, when subjects shifted their gaze to indicate the remembered target locations. Thus, foveal visual images are severely distorted in short-term memory.
The neural underpinnings of the biological visual system are challenging to study experimentally, in particular as neuronal activity becomes increasingly nonlinear with respect to the visual input. Artificial neural networks (ANNs) can serve a variety of goals for improving our understanding of this complex system, not only serving as predictive digital twins of sensory cortex for novel hypothesis generation in silico, but also incorporating bio-inspired architectural motifs to progressively bridge the gap between biological and machine vision. The mouse has recently emerged as a popular model system to study visual information processing, but no standardized large-scale benchmark to identify state-of-the-art models of the mouse visual system has been established. To fill this gap, we propose the SENSORIUM benchmark competition. We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons across seven mice stimulated with thousands of natural images, together with simultaneous behavioral measurements that include running speed, pupil dilation, and eye movements. The benchmark challenge will rank models based on predictive performance for neuronal responses on a held-out test set, and includes two tracks for model input limited to either stimulus only (SENSORIUM) or stimulus plus behavior (SENSORIUM+). We provide a starting kit to lower the barrier for entry, including tutorials, pretrained baseline models, and APIs with one-line commands for data loading and submission. We would like to see this as a starting point for regular challenges and data releases, and as a standard tool for measuring progress in large-scale neural system identification models of the mouse visual system and beyond.
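The challenge ranks models by predictive performance on held-out responses. A common scoring choice for this kind of benchmark is the per-neuron Pearson correlation between predicted and observed responses, averaged over neurons; the sketch below assumes that metric and is not the official SENSORIUM scoring code.

```python
import numpy as np

def per_neuron_correlation(pred, true, eps=1e-8):
    """Pearson correlation per neuron; pred and true are (n_trials, n_neurons)."""
    pred = pred - pred.mean(axis=0, keepdims=True)
    true = true - true.mean(axis=0, keepdims=True)
    num = (pred * true).sum(axis=0)
    den = np.sqrt((pred ** 2).sum(axis=0) * (true ** 2).sum(axis=0)) + eps
    return num / den

# Toy example: a model whose predictions partially track the ground truth.
pred = np.random.rand(500, 100)                  # 500 test images, 100 neurons
true = pred + 0.5 * np.random.rand(500, 100)     # noisy "observed" responses
print("mean correlation:", per_neuron_correlation(pred, true).mean())
```

Averaging the per-neuron correlations gives a single leaderboard number while still weighting every neuron equally, regardless of its firing-rate scale.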