Highlights
- A generative deep neural network and a genetic algorithm evolved images guided by neuronal firing
- Evolved images maximized neuronal firing in alert macaque visual cortex
- Evolved images activated neurons more than large numbers of natural images
- Similarity to evolved images predicts responses of neurons to novel images
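The closed-loop idea behind these highlights can be sketched in a few lines: a genetic algorithm proposes image codes, a neuron scores them, and selection plus mutation drives the population toward images that fire the neuron more strongly. The sketch below is purely illustrative; a fixed linear filter stands in for the recorded macaque neuron, and the deep generative network that maps codes to images is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, POP, GENS = 256, 20, 60

# Hypothetical stand-in for a recorded neuron: a fixed linear filter.
# (The actual experiments scored images by spike counts from macaque
# visual cortex and rendered codes through a generative network.)
preferred = rng.standard_normal(DIM)
def firing_rate(code):
    return float(code @ preferred)

codes = rng.standard_normal((POP, DIM))
history = []
for _ in range(GENS):
    rates = np.array([firing_rate(c) for c in codes])
    history.append(rates.max())
    survivors = codes[np.argsort(rates)[-POP // 2:]]       # selection
    mutants = survivors + 0.1 * rng.standard_normal(survivors.shape)
    codes = np.vstack([survivors, mutants])                # next generation

print(history[0], history[-1])  # best "firing rate" climbs across generations
```

Even this toy loop shows the key property exploited by the study: the optimizer needs only a scalar fitness signal per image, so real spike counts can replace the synthetic score without changing the algorithm.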
How we perceive the world as stable despite the frequent disruptions of the retinal image caused by eye movements is one of the fundamental questions in sensory neuroscience. Seemingly convergent evidence points toward a mechanism that dynamically updates representations of visual space in anticipation of a movement (Wurtz, 2008). In particular, receptive fields (RFs) of neurons, predominantly within oculomotor and attention-related brain structures (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997), are thought to "remap" to their future, post-movement location prior to an impending eye movement. New studies (Neupane et al., 2016a,b) report observations on the RF dynamics of neurons in area V4 at the time of eye movements. These dynamics are interpreted as being largely dominated by a remapping of RFs. Critically, these observations appear at odds with a previous study reporting a different type of RF dynamics within the same brain structure (Tolias et al., 2001), namely a shrinkage and shift of RFs toward the movement target. Importantly, RFs were measured with different techniques in these studies. Here, we measured V4 RFs in a manner comparable to Neupane et al. (2016a,b) and observe a shrinkage and shift of RFs toward the movement target when analyzing the immediate stimulus response (Zirnsak et al., 2014). When analyzing the late stimulus response (Neupane et al., 2016a,b), we observe RF shifts resembling remapping. We discuss possible causes for these shifts and point out important issues that future studies on RF dynamics need to address.
The macaque occipitotemporal cortex contains clusters of neurons with preferences for categories such as faces, body parts, and places. One common question is how these clusters (or "domains") acquire their cortical position along the ventral stream. We and other investigators previously established an fMRI-level correlation among these category domains, retinotopy, and curvature preferences: for example, in inferotemporal cortex, face- and curvature-preferring domains show a central visual field bias whereas place- and rectilinear-preferring domains show a more peripheral visual field bias. Here, we have found an electrophysiological-level explanation for the correlation among domain preference, curvature, and retinotopy based on neuronal preference for short over long contours, also called end-stopping.
The detection of visual motion requires temporal delays to compare current with earlier visual input. Models of motion detection assume that these delays reside in separate classes of slow and fast thalamic cells, or in slow and fast synaptic transmission. We used a data-driven modeling approach to generate a model that instead uses recurrent network dynamics with a single, fixed temporal integration window to implement the velocity computation. This model successfully reproduced the temporal response dynamics of a population of motion-sensitive neurons in macaque middle temporal area (MT), and its constituent parts matched many of the properties found in the motion processing pathway (e.g., Gabor-like receptive fields (RFs), simple and complex cells, spatially asymmetric excitation and inhibition). Reverse correlation analysis revealed that a simplified network based on first- and second-order space-time correlations of the recurrent model behaved much like a feedforward motion energy (ME) model. The feedforward model, however, failed to capture the full speed-tuning and direction-selectivity properties typically found in MT, which depend on higher-than-second-order space-time correlations. These findings support the idea that recurrent network connectivity can create the temporal delays needed to compute velocity. Moreover, the model explains why the motion detection system often behaves like a feedforward ME network, even though the anatomical evidence strongly suggests that this network should be dominated by recurrent feedback.
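The feedforward motion energy model referenced above rests on a standard construction: a quadrature pair of spatiotemporally tilted filters whose squared, summed outputs give a phase-invariant, direction-selective energy. A minimal 1-D space-by-time sketch follows; all filter parameters are illustrative choices, not values fitted to MT data.

```python
import numpy as np

# 1-D space x time motion-energy unit: quadrature pair of oriented
# spatiotemporal filters (illustrative parameters, not fitted to MT)
x = np.linspace(-2, 2, 64)             # space (deg)
t = np.linspace(0, 1, 32)              # time (s)
X, T = np.meshgrid(x, t)

sf, tf = 1.0, 4.0                      # spatial / temporal frequency
env = np.exp(-(X**2) / 0.5 - ((T - 0.5)**2) / 0.05)
phase = 2 * np.pi * (sf * X + tf * T)  # tilt in space-time sets preferred velocity
f_even, f_odd = env * np.cos(phase), env * np.sin(phase)

def motion_energy(stim):
    # squaring and summing the quadrature pair yields a
    # phase-invariant response (the "energy")
    return (stim * f_even).sum()**2 + (stim * f_odd).sum()**2

drift = lambda direction: np.cos(2 * np.pi * (sf * X + direction * tf * T))
pref, anti = motion_energy(drift(+1)), motion_energy(drift(-1))
print(pref > anti)  # the unit prefers gratings matching its space-time tilt
```

The contrast in the abstract is that such a unit captures only up to second-order space-time correlations, whereas the recurrent model also exploits higher-order structure.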
Perceptual stability requires the integration of information across eye movements. We first tested the hypothesis that motion signals are integrated by neurons whose receptive fields (RFs) do not move with the eye, but stay fixed in the world. Specifically, we measured the RF properties of neurons in the middle temporal area (MT) of macaques (Macaca mulatta) during the slow phase of optokinetic nystagmus. Using a novel method to estimate RF locations for both spikes and local field potentials, we found that the location on the retina that changed spike rates or local field potentials did not change with eye position; RFs moved with the eye. Second, we tested the hypothesis that neurons link information across eye positions by remapping the retinal location of their RFs to future locations. To test this, we compared RF locations during leftward and rightward slow phases of optokinetic nystagmus. We found no evidence for remapping during slow eye movements; the RF location was not affected by eye movement direction. Taken together, our results show that RFs of MT neurons and the aggregate activity reflected in local field potentials are yoked to the eye during slow eye movements. This implies that individual MT neurons do not integrate sensory information from a single position in the world across eye movements. Future research will have to determine whether such integration, and the construction of perceptual stability, takes place in the form of a distributed population code in eye-centered visual cortex, or is deferred to downstream areas.
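The spike-based RF-location estimation described above can be illustrated with a simple reverse-correlation toy: probe random screen positions, weight each probe by the evoked spike count, and take the weighted mean. This is one generic estimator under an assumed Gaussian RF; it is not the paper's actual ("novel") method, and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: a Gaussian RF centered at (3, -2) deg
rf_center = np.array([3.0, -2.0])
def spike_count(stim_pos):
    d2 = ((stim_pos - rf_center)**2).sum()
    return rng.poisson(10 * np.exp(-d2 / 4.0))   # Poisson spiking

# Probe random positions; estimate the RF center as the
# spike-count-weighted mean of probe positions
probes = rng.uniform(-8, 8, size=(5000, 2))
counts = np.array([spike_count(p) for p in probes])
estimate = (probes * counts[:, None]).sum(0) / counts.sum()
print(estimate)  # recovers a location near (3, -2)
```

Comparing such estimates across eye positions (or across leftward vs. rightward slow phases) is the logic of the tests in the abstract: if RFs are retinotopic, the retinal estimate should not change with eye position.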
Most approaches to visual prostheses have focused on the retina, and for good reasons. The earlier one introduces signals into the visual system, the more one can take advantage of its prodigious computational abilities. For methods that use microelectrodes to introduce electrical signals, however, the limited density and volume-occupying nature of the electrodes place severe limits on the image resolution that can be provided to the brain. In this regard, non-retinal areas in general, and the primary visual cortex in particular, possess one large advantage: the "magnification factor" (MF), a value that represents the distance across a sheet of neurons corresponding to a given angle of the visual field. In the foveal representation of primate primary visual cortex, the MF is enormous, on the order of 15–20 mm/deg in monkeys and humans, whereas on the retina the MF is limited by the optical design of the eye to around 0.3 mm/deg. This means that, for an electrode array of a given density, a much higher-resolution image can be introduced into V1 than onto the retina (or any other visual structure). In addition to this tremendous advantage in resolution, visual cortex is plastic at many different levels, ranging from a very local ability to learn to better detect electrical stimulation to higher levels of learning that permit human observers to adapt to radical changes to their visual inputs. We argue that the combination of the large magnification factor and the impressive ability of the cerebral cortex to learn to recognize arbitrary patterns might outweigh the disadvantages of bypassing earlier processing stages and make V1 a viable option for the restoration of vision.
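The resolution argument reduces to simple arithmetic: for a fixed electrode pitch, the number of addressable samples per degree of visual field scales with the magnification factor, so foveal V1 wins by roughly the ratio of the two MFs. The MF values come from the text above; the electrode pitch is a hypothetical figure for illustration.

```python
# Back-of-the-envelope comparison of addressable samples per degree
pitch_mm = 0.4                 # hypothetical electrode spacing (mm)
mf_v1, mf_retina = 15.0, 0.3   # mm of tissue per degree (values from the text)

samples_per_deg_v1 = mf_v1 / pitch_mm          # ~37.5 electrodes/deg in foveal V1
samples_per_deg_retina = mf_retina / pitch_mm  # <1 electrode/deg on the retina
print(samples_per_deg_v1 / samples_per_deg_retina)  # advantage = MF ratio, ~50x
```

Note that the pitch cancels out of the ratio, so the ~50-fold advantage holds for any array density; pitch only determines the absolute resolution each structure can support.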
The local field potential (LFP) is generally thought to be dominated by synaptic activity within a few hundred microns of the recording electrode. The sudden onset of a visual stimulus causes a large downward deflection of the LFP recorded in primary visual cortex, known as a visually evoked potential (VEP), followed by rhythmic oscillations in the gamma range (30-80 Hz) that are often in phase with action potentials of nearby neurons. By inactivating higher visual areas that send feedback projections to V1, we produced a large decrease in amplitude of the VEP, and a strong attenuation of gamma rhythms in both the LFP and multi-unit activity, despite an overall increase in neuronal spike rates. Our results argue that much of the recurrent, rhythmic activity measured in V1 is strongly gated by feedback from higher areas, consistent with models of coincidence detection that result in burst firing by layer 5 pyramidal neurons.
Neurons in primate inferotemporal cortex (IT) are clustered into patches of shared image preferences. Functional imaging has shown that these patches are activated by natural categories (e.g., faces, body parts, and places), artificial categories (numerals, words), and geometric features (curvature and real-world size). These domains develop in the same cortical locations across monkeys and humans, which raises the possibility of common innate mechanisms. Although these common mechanisms could take the form of high-level, template-based categories, it is alternatively possible that domain locations are constrained by low-level properties such as end-stopping, eccentricity, and the shape of the preferred images. To explore this, we looked for correlations among curvature preference, receptive field (RF) end-stopping, and RF eccentricity in the ventral stream. We recorded from sites in V1, V4, and posterior IT (PIT) in six monkeys using microelectrode arrays. Across all visual areas, we found a tendency for end-stopped sites to prefer curved over straight contours. Further, we found a progression in population curvature preferences along the visual hierarchy: on average, V1 sites preferred straight Gabors, V4 sites preferred curved stimuli, and many PIT sites showed a preference for curvature that was concave relative to fixation. Our results provide evidence that high-level functional domains may be mapped according to early rudimentary properties of the visual system.