2007
DOI: 10.1167/7.5.4
Computations for geometrically accurate visually guided reaching in 3-D space

Abstract: A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute the visual (gaze-centered) desired movement vector. It then transformed these eye-centered plans into shoulder-centered …
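The eye-to-shoulder transformation the abstract describes can be illustrated, in highly simplified form, as a chain of rotations applied to the gaze-centered movement vector. The sketch below is a deliberately reduced version (yaw-only rotations about a single axis; function names and the 1-DOF simplification are assumptions, not the paper's full 3-D model, which incorporates complete eye and head orientation signals):

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z-axis (theta in radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def gaze_to_shoulder(movement_vec, eye_angle, head_angle):
    """Rotate a gaze-centered desired movement vector into
    shoulder-centered coordinates: first undo eye-in-head
    orientation, then head-on-shoulder orientation.
    Hypothetical yaw-only simplification of the 3-D case."""
    v = matvec(rot_z(eye_angle), movement_vec)   # eye-in-head
    return matvec(rot_z(head_angle), v)          # head-on-shoulder
```

In the full model these would be general 3-D rotations (the paper uses complete rotational geometry rather than planar approximations), but the compositional structure — visual vector, then eye orientation, then head orientation — is the same.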

Cited by 79 publications (119 citation statements). References 116 publications.
“…A feed-forward three-layer neural network trained using back-propagation accomplishes this using gaze-centered, gain-modulated nodes, not intermediate coding (11,12,14,16,17,30,36,37). Although it is true that neural networks can be designed to produce intermediate representations (15,17,26), it nonetheless behooves us to ask if intermediate representations might be artifactual.…”
Section: Results
confidence: 99%
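The excerpt above refers to a feed-forward three-layer network, trained with back-propagation, that develops gaze-centered gain-modulated units. A minimal generic sketch of that training setup is given below; the toy task (output = retinal position scaled by an eye-position "gain" signal), the layer sizes, and the learning rate are all assumptions for illustration, not the cited study's actual architecture:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N_IN, N_HID = 2, 4   # inputs: retinal position, eye-position gain
W1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [random.uniform(-1, 1) for _ in range(N_HID)]
LR = 0.5

def forward(x):
    """Input -> sigmoid hidden layer -> linear output."""
    h = [sigmoid(sum(W1[i][j] * x[j] for j in range(N_IN))) for i in range(N_HID)]
    y = sum(W2[i] * h[i] for i in range(N_HID))
    return h, y

def mse(samples):
    return sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)

# Toy gain-modulation task: target = retinal position * gain.
probe = [([a / 4, b / 4], (a / 4) * (b / 4)) for a in range(5) for b in range(5)]
mse_before = mse(probe)

for _ in range(5000):
    x = [random.random(), random.random()]
    target = x[0] * x[1]
    h, y = forward(x)
    err = y - target          # dLoss/dy for squared error (up to factor 2)
    for i in range(N_HID):
        grad_h = err * W2[i] * h[i] * (1.0 - h[i])   # back-propagated error
        W2[i] -= LR * err * h[i]
        for j in range(N_IN):
            W1[i][j] -= LR * grad_h * x[j]

mse_after = mse(probe)
```

After training, the hidden units' responses to retinal position are scaled multiplicatively by the gain input, which is the signature of gain modulation the quote describes.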
“…We also incorporated static and dynamic VOR to account for ocular counter-roll with different gains. Retinal geometry was modeled using spherical projections (see also Blohm and Crawford 2007). Using quaternion algebra, eye-in-head rotation can be described by the following quaternion q (Tweed 1997a) …”
Section: Discussion
confidence: 99%
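The quaternion description of eye-in-head rotation mentioned in the excerpt can be illustrated with generic quaternion algebra (a sketch of the standard Hamilton convention, not the specific formulation from Tweed 1997a): a unit quaternion q rotates a vector v via the sandwich product q v q*.

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qrot(q, v):
    """Rotate 3-vector v by unit quaternion q: q v q*."""
    qv = (0.0, v[0], v[1], v[2])
    w, x, y, z = qmul(qmul(q, qv), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

def axis_angle(axis, theta):
    """Unit quaternion for a rotation of theta radians about axis."""
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(theta / 2.0)
    return (math.cos(theta / 2.0), *(a / n * s for a in axis))
```

Listing's law constrains which of these quaternions the eye actually uses (rotation axes confined to a head-fixed plane), which is why torsional components matter for the retinal geometry discussed in the quote.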
“…A6). Finally, we implemented the spherical projection geometry of the visual image onto the retina (Blohm and Crawford 2007; Crawford and Guitton 1997), which, together with Listing's law, resulted in the misalignment of the spatial and retinal axes at oblique eye positions. The predictions of this model will be described in more detail in RESULTS. We proposed two extreme working hypotheses that should allow interpretation of the experimental data.…”
Section: Model
confidence: 99%
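A spherical retinal projection, as mentioned in the excerpt, maps a 3-D point onto angular retinal coordinates rather than onto a flat image plane. A minimal sketch follows; the axis convention (y pointing along the line of sight, x rightward, z upward) and the function name are assumptions for illustration:

```python
import math

def spherical_retinal_projection(p):
    """Project an eye-centered 3-D point onto retinal azimuth and
    elevation angles (radians). Assumed convention: y = line of
    sight, x = rightward, z = upward."""
    x, y, z = p
    azimuth = math.atan2(x, y)                    # horizontal eccentricity
    elevation = math.atan2(z, math.hypot(x, y))   # vertical eccentricity
    return azimuth, elevation
```

Unlike a planar (pinhole) projection, angular eccentricity here grows with atan2 rather than linearly in image coordinates, which is what makes oblique eye positions misalign spatial and retinal axes once Listing's law fixes the eye's torsion.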
“…In recent years, much emphasis has been placed on the visuomotor transformations underlying reach planning (see Crawford et al 2011 for review), where it has been shown that both eye and head position are taken into account in the three-dimensional (3D) transformation of eye-centered visual signals into shoulder-centered motor commands (Blohm and Crawford 2007; Leclercq et al 2012).…”
Section: Soechting and Flanders 1989
confidence: 99%