When making perceptual decisions, humans have been shown to integrate independent noisy multisensory information optimally, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretical limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by an ML estimator that includes only noninteracting vestibular contributions. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published data from patients with bilateral vestibular loss. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements, as can a model that includes nonvestibular contributions. In summary, we found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this "apparent" better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model.
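The maximum-likelihood benchmark referenced above can be sketched as an inverse-variance weighting of two independent Gaussian cues; the numeric values below are purely illustrative, not measured thresholds:

```python
import math

def ml_combine(sigma_a, sigma_b):
    """Inverse-variance (maximum-likelihood) combination of two
    independent Gaussian cues: the combined sigma is always lower
    than either single-cue sigma."""
    var_a, var_b = sigma_a ** 2, sigma_b ** 2
    sigma_combined = math.sqrt(var_a * var_b / (var_a + var_b))
    weight_a = var_b / (var_a + var_b)  # weight given to the more precise cue B's rival, cue A
    return sigma_combined, weight_a

# Illustrative cue sigmas, same units:
sigma_c, w_a = ml_combine(2.0, 1.0)
# sigma_c = sqrt(4/5) ~ 0.894, below the better cue's sigma of 1.0
```

Thresholds falling *below* this combined sigma, as in the 0.2-0.5 Hz roll tilt data, are what motivates adding interaction terms (Kalman filter) or extra cues (somatosensation) to the model.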
Karmali F, Lim K, Merfeld DM. Visual and vestibular perceptual thresholds each demonstrate better precision at specific frequencies and also exhibit optimal integration. J Neurophysiol 111: 2393-2403, 2014. First published December 26, 2013; doi: 10.1152/jn.00332.2013. Prior studies show that visual motion perception is more precise than vestibular motion perception, but it is unclear whether this is universal or the result of specific experimental conditions. We compared visual and vestibular motion precision over a broad range of temporal frequencies by measuring thresholds for vestibular (subject motion in the dark), visual (visual scene motion) or visual-vestibular (subject motion in the light) stimuli. Specifically, thresholds were measured for motion frequencies spanning a two-decade physiological range (0.05-5 Hz) using single-cycle sinusoidal acceleration roll tilt trajectories (i.e., distinguishing left-side down from right-side down). We found that, while visual and vestibular thresholds were broadly similar between 0.05 and 5.0 Hz, each cue is significantly more precise than the other at certain frequencies. Specifically, we found that 1) visual and vestibular thresholds were indistinguishable at 0.05 Hz and 2 Hz (i.e., similarly precise); 2) visual thresholds were lower (i.e., vision more precise) than vestibular thresholds between 0.1 Hz and 1 Hz; and 3) visual thresholds were higher (i.e., vision less precise) than vestibular thresholds above 2 Hz. This shows that vestibular perception can be more precise than visual perception at physiologically relevant frequencies. We also found that sensory integration of visual and vestibular information is consistent with static Bayesian optimal integration of visual-vestibular cues. In contrast with most prior work that degraded or altered sensory cues, we demonstrated static optimal integration using natural cues.
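A single-cycle sinusoidal acceleration trajectory of the kind described above can be generated numerically; the frequency and peak acceleration below are arbitrary example values. Integrating the acceleration once yields a raised-cosine velocity profile that starts and ends at zero, and integrating twice yields a smooth unidirectional displacement:

```python
import numpy as np

def single_cycle_sinusoid(freq_hz, peak_acc, n=1000):
    """One full cycle of sinusoidal acceleration at freq_hz.
    Returns time, acceleration, velocity, and position, with the
    integrals computed by a simple left-Riemann cumulative sum."""
    T = 1.0 / freq_hz
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    acc = peak_acc * np.sin(2 * np.pi * freq_hz * t)
    vel = np.cumsum(acc) * dt   # starts at 0, returns to ~0 at cycle end
    pos = np.cumsum(vel) * dt   # monotonic: motion is in one direction
    return t, acc, vel, pos

# Example: a 0.5-Hz cycle (2 s duration) with unit peak acceleration
t, acc, vel, pos = single_cycle_sinusoid(0.5, 1.0)
```

Because velocity never changes sign, the stimulus is a single leftward (or rightward) motion, which is what makes it suitable for a direction-recognition task.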
We previously published vestibular perceptual thresholds and performance in the Modified Romberg Test of Standing Balance in 105 healthy humans ranging from ages 18 to 80 (1). Self-motion thresholds in the dark included roll tilt about an earth-horizontal axis at 0.2 and 1 Hz, yaw rotation about an earth-vertical axis at 1 Hz, y-translation (interaural/lateral) at 1 Hz, and z-translation (vertical) at 1 Hz. In this study, we focus on multiple-variable analyses not reported in the earlier study. Specifically, we investigate correlations (1) among the five thresholds measured and (2) between thresholds, age, and the chance of failing condition 4 of the balance test, which increases vestibular reliance by having subjects stand on foam with eyes closed. We found moderate correlations (0.30–0.51) between vestibular thresholds for different motions, both before and after using our published aging regression to remove age effects. We found that having lower or higher thresholds across all threshold measures is an individual trait that accounts for about 60% of the variation in the population. This can be further divided into two components, with about 20% of the variation explained by aging and 40% of the variation explained by a single principal component that includes similar contributions from all threshold measures. When only roll tilt 0.2 Hz thresholds and age were analyzed together, we found that the chance of failing condition 4 depended significantly on both (p = 0.006 and p = 0.013, respectively). An analysis incorporating more variables found that the chance of failing condition 4 depended significantly only on roll tilt 0.2 Hz thresholds (p = 0.046) and not on age (p = 0.10), sex, or any of the other four threshold measures, suggesting that some of the age effect might be captured by the fact that vestibular thresholds increase with age.
For example, at 60 years of age, the chance of failing is roughly 5% for the lowest roll tilt thresholds in our population, but this increases to 80% for the highest roll tilt thresholds. These findings demonstrate the importance of roll tilt vestibular cues for balance, even in individuals reporting no vestibular symptoms and with no evidence of vestibular dysfunction.
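The failure-probability relationship described above has the shape of a logistic regression. The sketch below shows the form of such a model; the coefficients are hypothetical placeholders chosen only to reproduce the qualitative pattern (low failure chance at low thresholds, high chance at high thresholds), not the published fit:

```python
import math

def p_fail_condition4(roll_tilt_threshold, age,
                      b0=-8.0, b_thresh=1.5, b_age=0.06):
    """Logistic model of the chance of failing balance condition 4
    as a function of roll tilt 0.2 Hz threshold (deg) and age (yr).
    All coefficients are illustrative assumptions, not fitted values."""
    logit = b0 + b_thresh * roll_tilt_threshold + b_age * age
    return 1.0 / (1.0 + math.exp(-logit))

# At age 60, a low vs high roll tilt threshold (example values):
p_low = p_fail_condition4(0.5, 60)   # small failure probability
p_high = p_fail_condition4(4.0, 60)  # much larger failure probability
```

The key property is the steep sigmoidal dependence on the roll tilt threshold at fixed age, mirroring the roughly 5% to 80% range quoted in the text.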
Flying a parabolic trajectory in an aircraft is one of the few ways to create freefall on Earth, which is important for astronaut training and scientific research. Here we review the physics underlying parabolic flight, explain the resulting flight dynamics, and describe several counterintuitive findings, which we corroborate using experimental data. Typically, the aircraft flies parabolic arcs that produce approximately 25 seconds of freefall (0 g) followed by 40 seconds of enhanced force (1.8 g), repeated 30-60 times. Although passengers perceive gravity to be zero, it is actually the acceleration, not gravity, that has changed; thus we caution against the terms "microgravity" and "zero gravity." Despite the aircraft trajectory including large (45°) pitch-up and pitch-down attitudes, the occupants experience a net force perpendicular to the floor of the aircraft. This is because the aircraft generates appropriate lift and thrust to produce the desired vertical and longitudinal accelerations, respectively, although we measured moderate (0.2 g) aft-ward accelerations during certain parts of these trajectories. Aircraft pitch rotation (average 3°/s) is barely detectable by the vestibular system, but could influence some physics experiments. Investigators should consider such details in the planning, analysis, and interpretation of parabolic-flight experiments.
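The quoted pitch rate follows directly from ballistic kinematics: during the 0-g arc the flight-path angle swings from roughly +45° to -45° over the freefall duration. A minimal sketch, assuming a symmetric parabola and gravity alone acting on the aircraft:

```python
import math

G = 9.81  # m/s^2

def parabola_summary(entry_pitch_deg=45.0, freefall_s=25.0):
    """Average pitch rate and implied airspeed for a ballistic arc.
    The flight-path angle changes by 2 * entry_pitch over the
    freefall time; gravity alone reverses the vertical velocity
    component 2 * v * sin(entry_pitch)."""
    pitch_rate = 2.0 * entry_pitch_deg / freefall_s  # deg/s
    v = G * freefall_s / (2.0 * math.sin(math.radians(entry_pitch_deg)))  # m/s
    return pitch_rate, v

pitch_rate, v = parabola_summary()
# ~3.6 deg/s average pitch rate, airspeed on the order of 170 m/s
```

The computed rate of about 3.6°/s is consistent with the ~3°/s average reported in the abstract, and the implied airspeed is in a plausible range for a transport-category aircraft.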
Karmali F, Merfeld DM. A distributed, dynamic, parallel computational model: the role of noise in velocity storage. Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering," that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception.
sensory estimation; Bayesian; particle filter

THE BRAIN uses networks of neurons to perform complex calculations using distributed, parallel computation, including the calculations required for dynamic motion control of the body, which, in turn, relies on estimating motion using sensory cues from afferent signals that are noisy, complementary, sometimes ambiguous, and could carry incomplete information. It has been proposed that the brain processes and combines vestibular cues using internal models (e.g., Angelaki et al.
When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold (“sigma”) estimation, we combined analytic approaches, Monte Carlo simulations, and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up (4D1U) staircase targeting between 0.86 and 0.92, or a standard 6D1U staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41%–58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13%–21% more efficient than the commonly used 3D1U symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable, and that human threshold forced-choice decision-making is modeled well by detection-theory models and mimics simulations based on detection theory models.
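The staircase targets quoted above come from a standard equilibrium argument: an n-down, 1-up track with symmetric steps settles where the probability of n consecutive correct responses equals the probability of one error, i.e., p**n = 0.5. A short sketch, paired with a toy direction-recognition observer whose fraction correct follows a cumulative Gaussian (the observer and step sizes are illustrative assumptions):

```python
import math, random

def ndown_1up_target(n):
    """Asymptotic fraction correct targeted by an n-down, 1-up
    staircase with equal up/down step sizes: solves p**n = 0.5."""
    return 0.5 ** (1.0 / n)

def simulate_staircase(sigma=1.0, n_down=3, step=0.1,
                       n_trials=400, seed=1):
    """Run a toy n-down, 1-up staircase against an observer whose
    p(correct) at stimulus x is the cumulative Gaussian Phi(x/sigma).
    Returns the mean stimulus level over the second half of the track."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    x, correct_run, track = 2.0, 0, []
    for _ in range(n_trials):
        track.append(x)
        if rng.random() < phi(x / sigma):   # correct response
            correct_run += 1
            if correct_run == n_down:       # n in a row -> make it harder
                x = max(x - step, 0.0)
                correct_run = 0
        else:                               # wrong -> make it easier
            x += step
            correct_run = 0
    return sum(track[n_trials // 2:]) / (n_trials // 2)

# 3D1U targets ~0.794 correct, 4D1U ~0.841, 6D1U ~0.891
level = simulate_staircase()
```

For sigma = 1, the 3D1U track should hover near the stimulus level where Phi(x) = 0.794, i.e., around x = 0.82, illustrating why higher-n staircases probe the steeper, more informative upper part of the psychometric function.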
Earlier spatial orientation studies used both motion-detection (e.g., did I move?) and direction-recognition (e.g., did I move left/right?) paradigms. The purpose of our study was to compare thresholds measured with motion-detection and direction-recognition tasks on a standard Moog motion platform to see whether a substantial fraction of the reported threshold variation might be explained by the use of different discrimination tasks in the presence of vibrations that vary with motion. Thresholds for the perception of yaw rotation about an earth-vertical axis and for interaural translation in an earth-horizontal plane were determined for four healthy subjects with standard detection and recognition paradigms. For yaw rotation two-interval detection thresholds were, on average, 56 times smaller than two-interval recognition thresholds, and for interaural translation two-interval detection thresholds were, on average, 31 times smaller than two-interval recognition thresholds. This substantive difference between recognition thresholds and detection thresholds is one of our primary findings. For motions near our measured detection threshold, we measured vibrations that matched previously established vibration thresholds. This suggests that vibrations contribute to whole body motion detection. We also recorded yaw rotation thresholds on a second motion device with lower vibration and found direction-recognition and motion-detection thresholds that were not significantly different from one another or from the direction-recognition thresholds recorded on our Moog platform. Taken together, these various findings show that yaw rotation recognition thresholds are relatively unaffected by vibration when moderate (up to ≈ 0.08 m/s²) vibration cues are present.
The brain uses information from different sensory systems to guide motor behavior, and aging is associated with simultaneous decline in the quality of sensory information provided to the brain and deterioration in motor control. Correlations between age-dependent decline in sensory anatomical structures and behavior have been demonstrated in many sensorimotor systems, and it has recently been suggested that a Bayesian framework could explain these relationships. Here we show that age-dependent changes in a human sensorimotor reflex, the vestibuloocular reflex, are explained by a Bayesian optimal adaptation in the brain occurring in response to death of motion-sensing hair cells. Specifically, we found that the temporal dynamics of the reflex as a function of age emerge from a Kalman filter model (r = 0.93, P < 0.001) that determines the optimal behavioral output when the sensory signal-to-noise characteristics are degraded by death of the transducers. These findings demonstrate that the aging brain is capable of generating the ideal and statistically optimal behavioral response when provided with deteriorating sensory information. While the Bayesian framework has been shown to be a general neural principle for multimodal sensory integration and dynamic sensory estimation, these findings provide evidence of longitudinal Bayesian processing over the human life span. These results illuminate how the aging brain strives to optimize motor behavior when faced with deterioration in the peripheral and central nervous systems and have implications in the field of vestibular and balance disorders, as they will likely provide guidance for physical therapy and for prosthetic aids that aim to reduce falls in the elderly. NEW & NOTEWORTHY We showed that age-dependent changes in the vestibuloocular reflex are explained by a Bayesian optimal adaptation in the brain that occurs in response to age-dependent sensory anatomical changes.
This demonstrates that the brain can longitudinally respond to age-related sensory loss in an ideal and statistically optimal way. This has implications for understanding and treating vestibular disorders caused by aging and provides insight into the structure-function relationship during aging.
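The qualitative mechanism invoked here, i.e., that noisier sensors lead an optimal estimator to rely more on its internal prediction, can be seen in a minimal scalar Kalman filter. This is a generic illustration of the gain-versus-noise tradeoff, not the paper's VOR model; the dynamics parameter and noise variances are arbitrary example values:

```python
def steady_state_kalman_gain(a=0.95, Q=0.01, R=1.0, iters=500):
    """Iterate the scalar Riccati recursion for the system
    x[k+1] = a*x[k] + w (var Q), z[k] = x[k] + v (var R)
    until the Kalman gain converges, and return that gain."""
    P = 1.0
    K = 0.0
    for _ in range(iters):
        P_pred = a * P * a + Q        # predict: prior variance
        K = P_pred / (P_pred + R)     # gain: trust in the measurement
        P = (1.0 - K) * P_pred        # update: posterior variance
    return K

# Degrading the sensor (larger R) lowers the optimal gain, so the
# estimator leans more heavily on its internal model:
K_low_noise = steady_state_kalman_gain(R=0.1)
K_high_noise = steady_state_kalman_gain(R=10.0)
```

In the aging-VOR account above, the measurement-noise term corresponds to degraded hair-cell signal-to-noise, and the reweighting toward the internal model is what reshapes the reflex dynamics.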