2015
DOI: 10.1152/jn.00273.2014

Computations underlying the visuomotor transformation for smooth pursuit eye movements

Abstract: Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To …

Cited by 6 publications (4 citation statements) · References 122 publications
“…Hence, they provide viable information for a coordinate transformation of visual signals from an eye‐centered to a head‐centered frame of reference at the population level. Such a transformation is thought to be necessary not only for a stable perception of our environment (Zipser & Andersen; Salinas & Abbott; Bremmer), but also for the computation of pursuit motor commands in the correct reference frame (Blohm & Lefèvre; Murdison et al.). It remains to be determined whether explicit head‐centered representations at the single-cell level, which have been shown for area VIP during steady fixation (Duhamel et al.; Avillac et al.; Schlack et al.), can also be found across eye movements.…”
Section: Discussion
confidence: 97%
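The eye-centered-to-head-centered transformation this excerpt refers to can be sketched in a few lines. This is a minimal toy, not the cited models: the function name and the single-pitch geometry are our own simplification (real 3-D models such as those in the cited work use full eye-and-head rotations), but it shows why the same retinal vector maps to different head-centered directions depending on eye orientation:

```python
import math

def eye_to_head(v_eye, eye_pitch_deg):
    """Re-express an eye-centered direction vector (x, y, z) in head-centered
    coordinates by applying a single eye-in-head pitch rotation about the
    x-axis. One axis is enough to illustrate the principle."""
    a = math.radians(eye_pitch_deg)
    x, y, z = v_eye
    # rotate about x by the eye-in-head pitch angle
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

# A target imaged straight ahead on the retina, (0, 0, 1), points in
# different head-centered directions depending on where the eye looks:
straight = eye_to_head((0.0, 0.0, 1.0), 0.0)   # eye at primary position
pitched  = eye_to_head((0.0, 0.0, 1.0), 20.0)  # eye pitched 20 deg up
```

The point of the sketch: a pursuit command computed directly from the retinal vector would be wrong whenever the eye is not at primary position, which is why the excerpt argues the motor command must be computed in the correct reference frame.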
“…We generated predictions for retinal torsion using the quaternion algebraic formulation developed in previous work from our lab (Blohm & Crawford, 2007; Blohm & Lefèvre, 2010; Murdison, Leclercq, Lefèvre, & Blohm, 2015). Briefly, this consisted of finding the torsional difference between screen coordinates and retinal coordinates, based on the orientation of Listing's plane for each participant with measured tilt a₀ (see Table 1):…”
Section: Discussion
confidence: 99%
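The quaternion bookkeeping behind such a torsion prediction can be illustrated with a small numerical sketch. This is not the authors' actual formulation (which fits each participant's measured Listing's-plane tilt); under textbook assumptions it only shows how a torsional difference about the line of sight falls out of comparing two rotations that reach the same gaze direction — here a screen-like horizontal-then-vertical (Fick) rotation versus the single-axis Listing rotation:

```python
import math

def qmul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def qrot(q, v):
    """Rotate 3-vector v by unit quaternion q."""
    w = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return w[1:]

def qaxis(angle_rad, axis):
    """Unit quaternion for a rotation of angle_rad about a unit axis."""
    s = math.sin(angle_rad / 2.0)
    return (math.cos(angle_rad / 2.0), s*axis[0], s*axis[1], s*axis[2])

def false_torsion_deg(h_deg, v_deg):
    """Torsion (deg, about the line of sight) of a Fick-style
    horizontal-then-vertical rotation relative to the single-axis Listing
    rotation reaching the same gaze. Axes: x = primary line of sight, z = up."""
    q_fick = qmul(qaxis(math.radians(h_deg), (0.0, 0.0, 1.0)),
                  qaxis(math.radians(v_deg), (0.0, 1.0, 0.0)))
    gx, gy, gz = qrot(q_fick, (1.0, 0.0, 0.0))
    n = math.hypot(gy, gz)
    if n < 1e-12:
        return 0.0  # gaze still at primary position: no torsion
    # Listing rotation axis = primary_gaze x gaze, which always lies in
    # the (y, z) plane, i.e. in Listing's plane (zero torsional component)
    q_listing = qaxis(math.acos(max(-1.0, min(1.0, gx))),
                      (0.0, -gz / n, gy / n))
    # residual rotation fixes the line of sight, so it is pure torsion
    w, x, _, _ = qmul(qconj(q_listing), q_fick)
    return math.degrees(2.0 * math.atan2(x, w))
```

For a purely horizontal gaze shift the two rotations coincide and the torsion is zero; for an oblique gaze (e.g. 20° horizontal and 20° vertical) a few degrees of "false torsion" appear — the kind of screen-versus-retina torsional discrepancy the quoted passage describes.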
“…The implication that the brain expends computational energy with each eye movement to predictively remap a (spatially incorrect) retinal perception is seemingly paradoxical; after all, in theory the brain has access to all the self-motion signals required to compensate for retinal blurring and/or retino-spatial misalignments. However, compensating for self-motion requires either updating of a nonspatial (e.g., retinal) representation (Henriques, Klier, Smith, Lowy, & Crawford, 1998; Medendorp, Van Asselt, & Gielen, 1999; Murdison et al., 2013) or subjecting sensory signals to reference frame transformations (Blohm & Crawford, 2007; Blohm & Lefèvre, 2010; Murdison et al., 2015) to achieve spatial accuracy. As both updating (Medendorp et al., 1999) and reference frame transformations appear to be stochastic processes (Alikhanian, Carvalho, & Blohm, 2015; Burns & Blohm, 2010; Burns, Nashed, & Blohm, 2011; Schlicht & Schrater, 2007; Sober & Sabes, 2003), eye-centered signals might provide high-acuity sensory information on which to base working memory (Golomb, Chun, & Mazer, 2008), perception (Burns et al., 2011; Rolfs et al., 2011) and movement generation (Schlicht & Schrater, 2007; Sober & Sabes, 2003) explicitly requiring a reference frame transformation.…”
Section: Discussion
confidence: 99%
“…Biases could arise within motion direction perception, which have been reported both for motion in depth (Duke & Rushton, 2012; Harris & Dean, 2003; Harris & Drga, 2005; Welchman, Tuck, & Harris, 2004) and in the frontal plane (Hubbard, 1990; Souman, Hooge, & Wertheim, 2005; Post & Chaderjian, 1987; Tynan & Sekuler, 1982). Beyond the actual information used, biases may depend on how motion signals are coded and combined (Baddeley & Tripathy, 1998; Barlow & Tripathy, 1997; Kwon, Tadin, & Knill, 2015; Leclercq, Blohm, & Lefèvre, 2012; Murdison, Leclercq, Lefèvre, & Blohm, 2015; Weiss, Simoncelli, & Adelson, 2002). Evidently, if motion extrapolation is based on biased motion signals, it should show systematic biases in the absence of compensatory mechanisms.…”
Section: Discussion
confidence: 99%