The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
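The mechanism described above can be sketched in a few lines: the backward pass sends errors through a fixed random matrix instead of the transpose of the forward weights, yet learning still succeeds. This is a minimal illustration; the network sizes, toy task, and all variable names are ours, not the paper's.

```python
import numpy as np

# Sketch of learning with random feedback weights: blame is assigned by
# multiplying errors by a FIXED random matrix B rather than W2.T.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 20, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))    # input -> hidden weights (learned)
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))   # hidden -> output weights (learned)
B = rng.normal(0.0, 0.5, (n_hid, n_out))    # random feedback weights (frozen)

X = rng.normal(size=(200, n_in))
T = np.tanh(X @ rng.normal(size=(n_in, n_out)))  # toy regression targets

lr = 0.05
losses = []
for _ in range(300):
    h = np.tanh(X @ W1.T)                # hidden activity
    y = h @ W2.T                         # network output
    e = y - T                            # output-layer error
    losses.append(float(np.mean(e ** 2)))
    dh = (e @ B.T) * (1.0 - h ** 2)      # blame reaches hidden layer via B
    W2 -= lr * (e.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)
```

Note that `B` is never updated: the error still decreases because the forward weights come to align with the fixed feedback pathway over the course of training.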
1. This paper develops three-dimensional models for the vestibuloocular reflex (VOR) and the internal feedback loop of the saccadic system. The models differ qualitatively from previous, one-dimensional versions, because the commutative algebra used in previous models does not apply to the three-dimensional rotations of the eye. 2. The hypothesis that eye position signals are generated by an eye velocity integrator in the indirect path of the VOR must be rejected, because in three dimensions the integral of angular velocity does not specify angular position. Computer simulations using eye velocity integrators show large, cumulative gaze errors and post-VOR drift. We describe a simple velocity-to-position transformation that works in three dimensions. 3. In the feedback control of saccades, eye position error is not the vector difference between actual and desired eye positions. Subtractive feedback models must continuously adjust the axis of rotation throughout a saccade, and they generate meandering, dysmetric gaze saccades. We describe a multiplicative feedback system that solves these problems and generates fixed-axis saccades that accord with Listing's law. 4. We show that Listing's law requires that most saccades have their axes out of Listing's plane. A corollary is that if three pools of short-lead burst neurons code the eye velocity command during saccades, the three pools are not yoked, but function independently during visually triggered saccades. 5. In our three-dimensional models, we represent eye position using four-component rotational operators called quaternions. This is not the only algebraic system for describing rotations, but it is the one that best fits the needs of the oculomotor system, and it yields much simpler models than do rotation matrices or other representations. 6. Quaternion models predict that eye position is represented on four channels in the oculomotor system: three for the vector components of eye position and one inversely related to gaze eccentricity and torsion. 7. Many testable predictions made by quaternion models also turn up in models based on other mathematics. These predictions are therefore more fundamental than the specific models that generate them. Among these predictions are 1) to compute eye position in the indirect path of the VOR, eye or head velocity signals are multiplied by eye position feedback and then integrated; consequently, 2) eye position signals and eye or head velocity signals converge on vestibular neurons, and their interaction is multiplicative.
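Point 2 above can be illustrated numerically (our own sketch, not the paper's model): the plain integral of angular velocity is order-blind and cannot encode 3-D eye position, but integrating the quaternion rate dq/dt = ½ ω ⊗ q, where velocity is multiplied by position feedback before integration, recovers the true orientation.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat(axis, angle):
    """Quaternion for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

# 90 deg about x then 90 deg about y, vs. the reverse order: the summed
# angular-velocity integral is identical, but the final positions differ.
q_xy = qmul(quat([0, 1, 0], np.pi/2), quat([1, 0, 0], np.pi/2))
q_yx = qmul(quat([1, 0, 0], np.pi/2), quat([0, 1, 0], np.pi/2))
assert not np.allclose(q_xy, q_yx)   # order matters in three dimensions

# Velocity-to-position transform: multiply omega by position feedback,
# then integrate (Euler steps with renormalization), one second per phase.
q = np.array([1.0, 0.0, 0.0, 0.0])
dt = 1e-3
for omega in ([np.pi/2, 0, 0], [0, np.pi/2, 0]):
    w = np.concatenate([[0.0], omega])       # angular velocity as pure quaternion
    for _ in range(1000):
        q = q + 0.5 * dt * qmul(w, q)        # dq/dt = 0.5 * omega (x) q
        q /= np.linalg.norm(q)
assert np.allclose(q, q_xy, atol=1e-4)       # recovers the true eye position
```

The first assertion is the failure mode of a naive velocity integrator; the second shows that the quaternion-based transform tracks position correctly for the same velocity input.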
We scan our surroundings with quick eye movements called saccades, and from the resulting sequence of images we build a unified percept by a process known as transsaccadic integration. This integration is often said to be flawed, because around the time of saccades, our perception is distorted and we show saccadic suppression of displacement (SSD): we fail to notice if objects change location during the eye movement. Here we show that transsaccadic integration works by optimal inference. We simulated a visuomotor system with realistic saccades, retinal acuity, motion detectors and eye-position sense, and programmed it to make optimal use of these imperfect data when interpreting scenes. This optimized model showed human-like SSD and distortions of spatial perception. It made new predictions, including tight correlations between perception and motor action (for example, more SSD in people with less-precise eye control) and a graded contraction of perceived jumps; we verified these predictions experimentally. Our results suggest that the brain constructs its evolving picture of the world by optimally integrating each new piece of sensory or motor information.
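A toy version of the optimal-inference account above (our simplification, not the paper's full visuomotor simulation): the measured trans-saccadic jump equals the true jump plus eye-position noise, and the prior says the world is stable. The Gaussian posterior mean then shrinks perceived jumps toward zero, and shrinks them more when eye-position sense is less precise, which is exactly the graded contraction and the "more SSD with less-precise eye control" predictions.

```python
# MAP estimate for a stability prior N(0, sigma_stable^2) combined with a
# displacement measurement corrupted by eye-position noise N(0, sigma_eye^2).
# Function name and default values are illustrative.

def perceived_jump(measured_deg, sigma_eye, sigma_stable=0.5):
    k = sigma_stable**2 / (sigma_stable**2 + sigma_eye**2)  # shrinkage factor
    return k * measured_deg

print(perceived_jump(2.0, sigma_eye=0.5))  # 1.0: precise eye sense, mild contraction
print(perceived_jump(2.0, sigma_eye=2.0))  # noisy eye sense, strong suppression (SSD)
```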
1. Do slow phase eye velocities generated by the vestibuloocular reflex (VOR) depend on eye position? If the purpose of the VOR is simply to stabilize the retinal image, there can be no such dependence, because eye velocity must always be equal and opposite to head velocity. But if the VOR tolerates some retinal slip to achieve other goals, such as reducing eye velocity or following Listing's law, then one should see specific patterns of dependence. We examined VOR responses of human subjects to yaw, pitch, and roll rotation while they looked in various directions, to quantify how the input-output properties of the VOR vary with eye position. 2. Eye rotation axes during yaw and pitch tilted in the same direction as the gaze line, but only one-quarter as far on average. Thus, during yaw head rotation, the axis of eye rotation was roughly aligned with the head axis when the subject looked straight ahead, but tilted up when the gaze direction was up, and down when gaze was down. The amount of tilt varied between subjects, but on average a 30 degrees change in eye position caused a 7.5 degrees tilt in the eye rotation axis. During pitch, the eye axis tilted right when gaze was right and left when gaze was left, also moving 7.5 degrees on average for a 30 degrees change in the gaze direction. 3. During roll stimulation, the axis of eye rotation tilted in the opposite direction to the gaze line, and about one-half as far. On average, when the gaze line moved 30 degrees down, the eye rotation axis tilted 12.0 degrees up; when the gaze moved 30 degrees left, the eye axis tilted 15.3 degrees right. 4. It is often argued that the torsional VOR is weak because head rotation about the line of sight causes little image displacement on the fovea. But the line of sight is collinear with the torsional axis only when the subject looks straight ahead. Does the "weak axis" of the VOR stay collinear with the gaze line when the subject looks eccentrically? We calculated the axis of head rotation for which the VOR response is weakest and found that it does vary with eye position, but does not stay parallel with the gaze direction. When subjects looked straight ahead, the weak axis was roughly collinear with the gaze line; when gaze shifted eccentrically, the weak axis shifted in the same direction, but only about one-half as far.
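The averaged tilt rules reported above can be packaged as a rule of thumb. The function name and sign convention (positive means the axis tilts in the same direction as the gaze line) are ours.

```python
# Empirical averages from the abstract: yaw/pitch axes tilt WITH gaze,
# about one-quarter as far; roll axes tilt AGAINST gaze, about half as far.

def vor_axis_tilt_deg(gaze_ecc_deg, stimulus):
    if stimulus in ("yaw", "pitch"):
        return 0.25 * gaze_ecc_deg      # same direction as gaze, one-quarter
    if stimulus == "roll":
        return -0.5 * gaze_ecc_deg      # opposite direction, about one-half
    raise ValueError(f"unknown stimulus: {stimulus}")

print(vor_axis_tilt_deg(30, "yaw"))     # 7.5 degrees, as reported above
print(vor_axis_tilt_deg(30, "roll"))    # -15.0: opposite to the gaze line
```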
The properties of the vestibuloocular reflex (VOR) when the axis of rotation is behind the eyes and fixation of a near target is required were studied in the monkey. The magnitude of VOR gain in each eye was found to be above 1.0 and near the ideal value for stabilizing a retinal image. Evidence that this large VOR gain was not visually mediated was provided by the observations that no reduction in gain and no phase lag were observed at high frequencies of head rotation (2 Hz), large gain was observed in the dark, and large gain was observed within 10-20 ms of the start of head rotation. The magnitude of VOR gain was found to increase with increasing radius of head rotation and also to increase with decreasing target distance. When the distances from the two eyes to the target were different the instantaneous velocities and VOR gains of the eyes were also different. The dependence on radius of rotation indicates that the VOR is mediated by a combination of otolith and canal inputs. A general model for otolith-canal interaction is proposed in which VOR gain is based on a computation of target location relative to the head. This model simplifies to the classical VOR reflex when a cyclopean eye is subjected only to angular displacement.
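The first-order geometry behind these gain findings can be sketched as follows (our simplification, not the paper's otolith-canal model): rotation about an axis a distance r behind the eye translates the eye at speed r·ω, and stabilizing a target at distance d then requires an extra r/d of eye rotation on top of the 1:1 rotational component, so the ideal gain is 1 + r/d.

```python
# Ideal VOR gain from first-order geometry; argument names are ours.

def ideal_vor_gain(axis_to_eye_m, target_dist_m):
    return 1.0 + axis_to_eye_m / target_dist_m

print(ideal_vor_gain(0.1, 0.2))    # 1.5: near target, gain well above 1.0
print(ideal_vor_gain(0.1, 10.0))   # 1.01: distant target, classical gain near 1
```

This captures both observations in the abstract: gain grows with the radius of rotation and with decreasing target distance, and it collapses to the classical unity-gain VOR for distant targets.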
When we view objects at various depths, the 3-D rotations of our two eyes are neurally yoked in accordance with a recently discovered geometric rule, here called the binocular extension of Listing's law, or L2. This paper examines the visual and motor consequences of this rule. Although L2 is a generalization of Listing's original, monocular law, it does not follow from current theories of the latter's function, which involve minimizing muscle work or optimizing certain aspects of retinal image flow. This study shows that a new optimization strategy that combines stereo vision with motor efficiency does explain L2, and describes the predictions of this new theory. Contrary to recent suggestions in the literature, L2 does not ensure vision of lines orthogonal to the visual plane, but rather reduces cyclodisparity of the visual plane itself; and L2 does not arise because a single, conjugate angular velocity command is sent to both eyes, but actually requires that the two eyes rotate with different speeds and axes when scanning an isovergence surface. This study shows that L2 is compatible with a 1-D control system for vergence alone (because horizontal and torsional vergence are yoked) and a 3-D system for combined, head-fixed saccades and vergence.
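A small check related to the monocular law that L2 generalizes (our own illustrative helper, not the paper's model): with the primary position straight ahead along +x, Listing's law says an eye-position quaternion has no torsional component, i.e. q = (w, 0, y, z).

```python
import numpy as np

def obeys_listing(q, tol=1e-9):
    # Hypothetical helper: with primary position along +x, torsion is the
    # quaternion's x component; Listing's law requires it to be zero.
    return abs(q[1]) < tol

c, s = np.cos(np.pi / 12), np.sin(np.pi / 12)  # half-angle for a 30 deg rotation
q_left = np.array([c, 0.0, 0.0, s])    # 30 deg gaze shift about the vertical z axis
q_twist = np.array([c, s, 0.0, 0.0])   # pure torsion about the line of sight

print(obeys_listing(q_left))   # True
print(obeys_listing(q_twist))  # False
```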