It has previously been reported that humans can determine their direction of 3D translation (heading) from the 2D velocity field of retinal motion experienced during self-motion through a rigid environment, as is done by current computational models of visual heading estimation from optic flow. However, these claims were supported by studies that used stimuli that contained low rotational flow rates and/or additional visual cues beyond the velocity field or a task in which observers were asked to indicate their future trajectory of self-motion (path). Thus, previous conclusions about heading estimation have been confounded by the presence of other visual factors beyond the velocity field, by the use of a path-estimation task, or both. In particular, path estimation involves an exocentric computation with respect to an environmental reference, whereas heading estimation is an egocentric computation with respect to one's line of sight. Here, we use a heading-adjustment task to demonstrate that humans can precisely estimate their heading from the velocity field, independent of visual information about path, displacement, layout, or acceleration, with accuracy robust to rotation rates at least as high as 20 deg/s. Our findings show that instantaneous velocity-field information about heading is directly available for the visual control of locomotion and steering.
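As background (a standard formulation, not material from this abstract, and with sign conventions that depend on the chosen coordinate frame), the instantaneous velocity field discussed above is commonly written in the Longuet-Higgins and Prazdny form. For observer translation (T_x, T_y, T_z), rotation (Ω_x, Ω_y, Ω_z), and an image point (x, y) projecting a scene point at depth Z (focal length normalized to 1):

```latex
\dot{x} = \frac{-T_x + x\,T_z}{Z} + \Omega_x\,xy - \Omega_y\,(1 + x^2) + \Omega_z\,y,
\qquad
\dot{y} = \frac{-T_y + y\,T_z}{Z} + \Omega_x\,(1 + y^2) - \Omega_y\,xy - \Omega_z\,x .
```

With zero rotation, the translational component radiates from the focus of expansion at (T_x/T_z, T_y/T_z), which is the heading; the rotational component is depth-independent and must be discounted, which is why high rotation rates are the demanding case for heading estimation.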
Time to contact (TTC) is specified optically by tau, and studies suggest that observers are sensitive to this information. However, TTC judgements also are influenced by other sources of information, including pictorial depth cues. Therefore, it is useful to identify these sources of information and to determine whether and how their effects combine when multiple sources are available. We evaluated the effect of five depth cues on TTC judgements. Results indicate that relative size, height in field, occlusion, and motion parallax influence TTC judgements. When multiple cues are available, an integration (rather than selection) strategy is used. Finally, the combined effects of multiple cues are not always consistent with a strict additive model and may be task dependent.
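As background (a standard derivation, not part of the abstract): for an approaching object of constant physical size S at distance Z(t), closing at constant speed, the optical variable tau equals time to contact without requiring knowledge of size or distance:

```latex
\theta(t) \approx \frac{S}{Z(t)}, \qquad
\dot{\theta}(t) = -\frac{S\,\dot{Z}(t)}{Z(t)^2}, \qquad
\tau \equiv \frac{\theta}{\dot{\theta}} = -\frac{Z}{\dot{Z}} = \mathrm{TTC}
\quad (\dot{Z}\ \text{constant},\ \dot{Z} < 0).
```

This is what makes tau attractive as an optical source of TTC information; the pictorial depth cues examined above supply additional information that, per the results, observers integrate rather than select among.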
Li, Li, Barbara T. Sweet, and Leland S. Stone. Effect of contrast on the active control of a moving line. J Neurophysiol 93: 2873-2886, 2005. First published December 22, 2004; doi:10.1152/jn.00200.2004. In many passive visual tasks, human perceptual judgments are contrast dependent. To explore whether these contrast dependencies of visual perception also affect closed-loop manual control tasks, we examined visuomotor performance as humans actively controlled a moving luminance-defined line over a range of contrasts. Four subjects were asked to use a joystick to keep a horizontal line centered on a display as its vertical position was perturbed by a sum of sinusoids under two control regimes. The total root mean square (RMS) position error decreased quasi-linearly with increasing log contrast across the tested range (mean slope across subjects: −8.0 and −7.7% per log2 contrast, for the two control regimes, respectively). Frequency-response (Bode) plots showed a systematic increase in open-loop gain (mean slope: 1.44 and 1.30 dB per log2 contrast, respectively) and a decrease in phase lag with increasing contrast, which can be accounted for by a decrease in response time delay (mean slope: 32 and 40 ms per log2 contrast, respectively). The performance data are well fit by the Crossover Model proposed by McRuer and Krendel, which allowed us to identify both the visual position and motion cues driving performance. This analysis revealed that the position and motion cues used to support manual control under both control regimes appear equally sensitive to changes in stimulus contrast. In conclusion, our data show that active control of a moving visual stimulus is as dependent on contrast as passive perception and suggest that this effect can be attributed to a shared contrast sensitivity early in the visual pathway, before any specialization for motion processing.
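The Crossover Model referenced above approximates the combined operator-plus-controlled-element open-loop response near crossover as Y_OL(jω) ≈ (ω_c/jω) e^{−jω τ_e}. The sketch below is illustrative only; the frequencies, starting values, bounds, and least-squares fit are assumptions, not the paper's identification procedure. It shows how the two quantities whose contrast dependence is reported above (open-loop gain via the crossover frequency, and effective time delay) can be fit to measured frequency-response points.

```python
# Illustrative fit of McRuer & Krendel's Crossover Model,
#   Y_OL(jw) = (w_c / (jw)) * exp(-jw * tau_e),
# to measured open-loop frequency-response points. Frequencies, starting
# values, and bounds below are assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import least_squares

def crossover_model(w, w_c, tau_e):
    jw = 1j * w
    return (w_c / jw) * np.exp(-jw * tau_e)

def fit_crossover(w, y_measured):
    """Estimate crossover frequency w_c (rad/s) and effective delay tau_e (s)."""
    def residuals(p):
        err = crossover_model(w, *p) - y_measured
        return np.concatenate([err.real, err.imag])   # stack real/imag parts
    return least_squares(residuals, x0=[2.0, 0.3],
                         bounds=([0.1, 0.0], [10.0, 1.0])).x

# Synthetic "measurements" at sum-of-sinusoids frequencies (noise-free demo):
w = np.array([0.5, 0.9, 1.7, 2.9, 4.3, 6.1])            # rad/s
w_c_hat, tau_e_hat = fit_crossover(w, crossover_model(w, 2.5, 0.35))
print(f"w_c = {w_c_hat:.2f} rad/s, tau_e = {tau_e_hat * 1e3:.0f} ms")
```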
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks in which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate time-varying frequency response functions; the estimation of time-varying pilot model parameters, however, was not considered. Estimating these parameters can be a valuable tool for quantifying different aspects of time-varying human manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
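As a minimal sketch of the windowed-estimation idea only (not the paper's pilot model or its maximum likelihood estimator): the snippet below recovers a time-varying pilot gain by fitting a static gain in each sliding window of a toy proportional model; with Gaussian remnant, the per-window least-squares estimate coincides with maximum likelihood for this simplified case. The window length trades off remnant sensitivity against the ability to track fast parameter changes, mirroring the trade-off reported above.

```python
# Illustrative windowed estimation of a time-varying pilot gain for a toy
# proportional model u(t) = K(t) * e(t) + remnant. The model, window sizes,
# and signals are assumptions for demonstration only.
import numpy as np

def windowed_gain_estimate(e, u, window, step):
    """Slide a window over error e and control u; return (window centers, K_hat)."""
    centers, gains = [], []
    for start in range(0, len(e) - window + 1, step):
        e_w = e[start:start + window]
        u_w = u[start:start + window]
        gains.append(np.dot(e_w, u_w) / np.dot(e_w, e_w))   # per-window LS gain
        centers.append(start + window // 2)
    return np.array(centers), np.array(gains)

# Synthetic demo: the "pilot" gain steps from 1 to 2 halfway through the run.
rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0, 0.01)                      # 120 s at 100 Hz
e = np.sin(0.8 * t) + 0.5 * np.sin(2.3 * t)          # stand-in tracking error
K_true = np.where(t < 60.0, 1.0, 2.0)
u = K_true * e + 0.1 * rng.standard_normal(t.size)   # control output + "remnant"
centers, K_hat = windowed_gain_estimate(e, u, window=1000, step=250)
```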
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Recent developments in fly-by-wire control architectures for rotorcraft have introduced new interest in the identification of time-varying pilot control behavior in multi-axis control tasks. In this paper, a maximum likelihood estimation method is used to estimate the parameters of a pilot model with time-dependent sigmoid functions to characterize time-varying human control behavior. An experiment was conducted in which 9 general aviation pilots performed a simultaneous roll and pitch control task with time-varying aircraft dynamics. Across 8 conditions, the axis containing the time-varying dynamics and the growth factor of the dynamics were varied, allowing for an analysis of the performance of the estimation method when estimating time-dependent parameter functions. In addition, a detailed analysis of pilots' adaptation to the time-varying aircraft dynamics in both the roll and pitch axes could be performed. Pilot control behavior in both axes was significantly affected by the time-varying aircraft dynamics in roll and pitch, and by the growth factor. The main effect was found in the axis that contained the time-varying dynamics. However, pilot control behavior also changed over time in the axis not containing the time-varying aircraft dynamics, indicating that some cross-coupling exists in the perception and control processes between the roll and pitch axes.
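A minimal sketch of a time-dependent sigmoid parameter function of the kind described above; the parameter name, values, and timing are illustrative assumptions, not the experiment's settings.

```python
# Sketch of a time-dependent sigmoid scheduling function for a pilot-model
# parameter: p(t) = p0 + (p1 - p0) / (1 + exp(-G * (t - tM))).
# Parameter name (Kp), values, and timing are illustrative assumptions.
import numpy as np

def sigmoid_parameter(t, p0, p1, G, tM):
    """Smooth transition of a pilot-model parameter from p0 to p1;
    G sets the transition steepness (growth factor), tM its midpoint."""
    return p0 + (p1 - p0) / (1.0 + np.exp(-G * (t - tM)))

t = np.linspace(0.0, 90.0, 901)                            # 90 s run, 10 Hz grid
Kp = sigmoid_parameter(t, p0=1.5, p1=3.0, G=0.5, tM=45.0)  # e.g., a pilot gain
```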
Humans perceive isoluminant visual stimuli (i.e., stimuli that show little or no luminance variation across space) to move more slowly than their luminance-defined counterparts. To explore whether impaired motion perception at isoluminance also affects visuomotor control tasks, the authors examined performance as humans actively controlled a moving line. They tested two types of displays matched for overall salience: a luminant display composed of a luminance-defined Gaussian-blurred horizontal line, and an isoluminant display composed of a color-defined line with the same spatial characteristics but near-zero luminance information. Six subjects were asked to use a joystick to keep the line centered on a cathode ray tube display as its vertical position was perturbed pseudorandomly by a sum of ten sinusoids under two control regimes (velocity and acceleration control). The mean root mean square position error was larger for the isoluminant than for the luminant line (mean across subjects: 22% and 29% larger, for the two regimes, respectively). The describing functions (Bode plots) showed that, compared with the luminant line, the isoluminant line yielded a lower open-loop gain (mean decrease: 3.4 and 2.9 dB, respectively) and an increased phase lag, which can be accounted for by an increase in reaction time (mean increase: 103 and 155 ms, respectively). The performance data are generally well fit by McRuer et al.'s classical crossover model. In conclusion, both our model-independent and model-dependent analyses show that the selective loss of luminance information impairs human active control performance, consistent with the preferential loss of information from cortical visual motion processing pathways. Display engineers must therefore be mindful of the importance of luminance contrast per se (not just total stimulus salience) in the design of effective visual displays for closed-loop active control tasks.
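A minimal sketch of how describing functions (Bode plots) can be estimated from recorded error and joystick signals at the sum-of-sinusoids perturbation frequencies; the sampling rate and frequencies shown are assumptions, not the study's values.

```python
# Illustrative estimation of a describing function from recorded tracking
# error e(t) and joystick output u(t), evaluated only at the sum-of-sinusoids
# forcing frequencies. Sampling rate and frequencies are assumptions.
import numpy as np

def describing_function(e, u, fs, forcing_freqs_hz):
    """Return gain (dB) and phase (deg) of U/E at the forcing frequencies."""
    freqs = np.fft.rfftfreq(len(e), d=1.0 / fs)
    E, U = np.fft.rfft(e), np.fft.rfft(u)
    idx = [int(np.argmin(np.abs(freqs - f))) for f in forcing_freqs_hz]
    H = U[idx] / E[idx]                        # response only where excited
    return 20.0 * np.log10(np.abs(H)), np.degrees(np.angle(H))

# Example call (recorded signals would go here):
# gain_db, phase_deg = describing_function(e, u, fs=60.0,
#                                          forcing_freqs_hz=[0.1, 0.25, 0.55])
```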
Humans rely on a variety of visual cues to inform them of the depth or range of a particular object or feature. Some cues are provided by physiological mechanisms, others from pictorial cues that are interpreted psychologically, and still others by the relative motions of objects or features induced by observer (or vehicle) motions. These cues provide different levels of information (ordinal, relative, absolute) and saliency depending upon depth, task, and interaction with other cues. Display technologies used for head-down and head-up displays, as well as out-the-window displays, have differing capabilities for providing depth cueing information to the observer/operator. In addition to technologies, display content and the source (camera/sensor versus computer rendering) provide varying degrees of cue information. Additionally, most displays create some degree of cue conflict. In this paper, visual depth cues and their interactions will be discussed, as well as display technology and content and related artifacts. Lastly, the role of depth cueing in performing closed-loop control tasks will be discussed.