2014
DOI: 10.1038/ncomms4047
Eye position information is used to compensate the consequences of ocular torsion on V1 receptive fields

Abstract: It is commonly held that the receptive fields (RFs) of neurons in primary visual cortex (V1) are fixed relative to the retina. Hence, V1 should be unable to distinguish between retinal image shifts due to object motion and image shifts resulting from ego motion. Here we show that, in contrast to this belief, a particular class of neurons in V1 of non-human primates has RFs that are actually head centred, despite intervening eye movements. They use eye position information to shift their RF locations and to ch…
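The abstract describes neurons whose RFs stay head centred by using eye position information to compensate for ocular torsion. A minimal sketch of that idea, under the assumption that compensation amounts to shifting the preferred retinal orientation by the signed torsion angle (the function name, sign convention, and degree units are illustrative, not the authors' model):

```python
def head_centered_orientation(retinal_pref_deg: float, torsion_deg: float) -> float:
    """Illustrative torsion compensation: map a retinal orientation
    preference to head-centred coordinates by adding the (signed)
    ocular torsion angle. Orientation is periodic over 180 degrees.

    Assumed sign convention: positive torsion rotates the eye such
    that the compensating shift is additive.
    """
    return (retinal_pref_deg + torsion_deg) % 180.0

# With 10 deg of ocular torsion, a neuron keeping a 45 deg head-centred
# preference would need its retinal preference shifted accordingly.
shifted = head_centered_orientation(45.0, 10.0)
```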


Cited by 18 publications (18 citation statements)
References 24 publications
“…Neural recordings in monkeys confirm that V1 neurons represent the visual scene in egocentric coordinates (Daddaoua et al., 2014). This finding conflicts with several earlier studies conducted in cats, which suggested that some V1 neurons might represent the visual scene relative to gravity (Denney and Adorjani, 1972; Horn et al., 1972; Tomko et al., 1981).…”
Section: Achieving Gravity-Centered Visual Perception
confidence: 97%
“…Another form is ocular counter-roll evoked by tilting the head about the roll axis. A specific subset of V1 neurons uses information on counter-roll to render visual orientation and position in a head-centered FOR, thereby helping to establish a world-centered representation of the visual world, invariant to roll tilt of the head and body 18. As ocular counter-roll is small, its contribution to a tilt-independent percept of the vertical is negligible.…”
Section: Discussion
confidence: 99%
“…In certain regions, such as parietal area 7a and LIP, the gain modulation is usually a monotonic function of the relevant bodily posture, such as eye or head position. For example, eye-position modulation often takes the form of a linear or saturating function of eye position, which multiplicatively modulates the underlying Gaussian retinotopic receptive field of a visually responsive neuron [6]. In other areas, such as V6A, there can be a higher proportion of retinotopic visual neurons with peaked eye-position gain fields [13].…”
confidence: 99%
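The multiplicative gain-field model quoted above can be sketched directly: a Gaussian retinotopic RF whose amplitude, but not retinal position, is scaled by a monotonic function of eye position. All parameter values and function names below are illustrative assumptions, not figures from the cited studies:

```python
import numpy as np

def retinotopic_rf(x, center=0.0, sigma=5.0):
    """Gaussian receptive field over retinal position x (deg)."""
    return np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

def eye_position_gain(e, slope=0.02, offset=0.5):
    """Monotonic (linear, saturating via clipping) gain as a function
    of horizontal eye position e (deg). Illustrative parameters."""
    return np.clip(offset + slope * e, 0.0, 1.0)

def response(x, e, rf_center=0.0):
    """Gain-field model: eye position multiplicatively scales the
    retinal RF; the RF's retinal location does not move."""
    return eye_position_gain(e) * retinotopic_rf(x, rf_center)

# Same retinal stimulus sweep at two eye positions: the response peak
# stays at the same retinal location, only its amplitude changes.
x = np.linspace(-20.0, 20.0, 101)
r_left = response(x, e=-20.0)
r_right = response(x, e=+20.0)
```

This captures why such neurons remain eye-centred: eye position rescales the response without shifting the RF on the retina, in contrast to the head-centred V1 neurons of the main paper.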
“…Moreover, even if the model has its synaptic connectivity perfectly prewired to perform a coordinate transformation from eye-centred input neurons with monotonic gain modulation to head-centred output neurons, subsequently introducing plasticity into the synaptic connections with the standard trace learning rule quickly degrades the synaptic connectivity and eventually leads to elimination of the head-centred output responses. Since eye-centred visual neurons with monotonic eye-position gain modulation are so common in the dorsal visual pathway [7,23,24], it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred visual responses in a subpopulation of postsynaptic receiving neurons. A subsequent analysis of the nature of the failure of the self-organisation of the synaptic connectivities led us to explore the performance of the model with a variety of modified, yet still biologically plausible, more powerful versions of the standard trace learning rule.…”
confidence: 99%
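The "standard trace learning rule" referenced in this passage pairs a Hebbian weight update with a temporal trace of the postsynaptic activity, so that inputs arriving close in time are bound to the same output. A minimal sketch under common assumptions (linear neuron, exponential trace, explicit weight normalisation; the learning rate, trace constant, and function names are illustrative, not the cited model's values):

```python
import numpy as np

def trace_learning_step(w, x, y_trace_prev, alpha=0.01, eta=0.8):
    """One step of a standard trace learning rule (sketch).

    y_trace = (1 - eta) * y + eta * y_trace_prev   # temporal trace of output
    dw      = alpha * y_trace * x                  # Hebbian update with trace
    Weights are renormalised to unit length to keep them bounded.
    """
    y = float(w @ x)                               # postsynaptic activation
    y_trace = (1.0 - eta) * y + eta * y_trace_prev
    w = w + alpha * y_trace * x
    w = w / np.linalg.norm(w)
    return w, y_trace

rng = np.random.default_rng(0)
w = rng.normal(size=8)
w /= np.linalg.norm(w)
y_trace = 0.0
for _ in range(100):
    x = rng.normal(size=8)
    w, y_trace = trace_learning_step(w, x, y_trace)
```

Because the trace carries output activity across successive fixations, the rule can associate different eye-centred inputs seen in sequence with one output, which is the intended route to head-centred responses that the quoted passage says the standard form fails to preserve.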