Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye movements reveal knowledge of the future position of a moving visual target, in most such studies targets move along simple trajectories through a frontoparallel plane. Here, using a naturalistic, racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.
Wilderness Search and Rescue (WiSAR) entails searching over large regions in often rugged, remote areas. Because of the large regions and potentially limited mobility of ground searchers, WiSAR is an ideal application for using small (human-packable) unmanned aerial vehicles (UAVs) to provide aerial imagery of the search region. This paper presents a brief analysis of the WiSAR problem with emphasis on practical aspects of vision-based aerial search. As part of this analysis, we present and analyze a generalized contour search algorithm, and relate this search to existing coverage searches. Extending beyond laboratory analysis, lessons from field trials with search and rescue personnel indicated the immediate need to improve two aspects of UAV-enabled search: how video information is presented to searchers and how UAV technology is integrated into existing WiSAR teams. In response to the first need, three computer vision algorithms for improving video display presentation are compared; results indicate that constructing temporally localized image mosaics is more useful than stabilizing video imagery. In response to the second need, a goal-directed task analysis of the WiSAR domain was conducted and combined with field observations to identify operational paradigms and field tactics for coordinating the UAV operator, the payload operator, the mission manager, and ground searchers.
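The abstract relates the contour search to existing coverage searches. As background, the most common coverage pattern in aerial search is the boustrophedon ("lawnmower") sweep. The sketch below is a generic illustration of that pattern, not the paper's contour algorithm; the function name, the rectangular search region, and the waypoint convention are all assumptions made for this example.

```python
def lawnmower_waypoints(width, height, spacing):
    """Generate boustrophedon ("lawnmower") waypoints covering a
    width x height rectangle, with parallel sweep lines `spacing` apart.

    Returns a list of (x, y) waypoints: each sweep line contributes its
    two endpoints, and the sweep direction alternates on each pass so
    the vehicle never backtracks across covered ground.
    """
    waypoints = []
    y = 0.0
    left_to_right = True
    while y <= height:
        row = [(0.0, y), (width, y)]
        if not left_to_right:
            row.reverse()  # fly this pass in the opposite direction
        waypoints.extend(row)
        left_to_right = not left_to_right
        y += spacing
    return waypoints
```

The `spacing` parameter would, in practice, be set from the camera footprint at the planned altitude so that adjacent sweeps overlap enough for detection.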
In addition to stimulus properties and task factors, memory is an important determinant of the allocation of attention and gaze in the natural world. One way that the role of memory is revealed is by predictive eye movements. Both smooth pursuit and saccadic eye movements demonstrate predictive effects based on previous experience. We have previously shown that unskilled subjects make highly accurate predictive saccades to the anticipated location of a ball prior to a bounce in a virtual racquetball setting. In this experiment, we examined this predictive behaviour. We asked whether the period after the bounce provides subjects with visual information about the ball trajectory that is used to programme the pursuit movement initiated when the ball passes through the fixation point. We occluded a 100 ms period of the ball's trajectory immediately after the bounce, and found very little effect on the subsequent pursuit movement. Subjects did not appear to modify their strategy to prolong the fixation. Neither were we able to find an effect on interception performance. Thus, it is possible that the occluded trajectory information is not critical for subsequent pursuit, and subjects may use an estimate of the ball's trajectory to programme pursuit. These results provide further support for the role of memory in eye movements.
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to initial setup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
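Two of the computations this primer describes, gaze-to-object angular distance and velocity-based event labeling, can be sketched briefly. The code below is a minimal illustration under common assumptions, not the primer's actual implementation: the function names are hypothetical, positions are 3-D world coordinates, and the event labels use a simple velocity-threshold heuristic with illustrative threshold values.

```python
import math

def angular_distance_deg(gaze_dir, eye_pos, target_pos):
    """Angle in degrees between the gaze direction vector and the
    vector from the eye to a virtual object."""
    vx, vy, vz = (t - e for t, e in zip(target_pos, eye_pos))
    gx, gy, gz = gaze_dir
    dot = gx * vx + gy * vy + gz * vz
    norm_g = math.sqrt(gx * gx + gy * gy + gz * gz)
    norm_v = math.sqrt(vx * vx + vy * vy + vz * vz)
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_g * norm_v)))
    return math.degrees(math.acos(cos_theta))

def classify_sample(speed_deg_per_s, saccade_thresh=100.0, fixation_thresh=5.0):
    """Label one gaze sample by angular speed: fast samples are saccades,
    slow samples are fixations, and intermediate speeds are treated as
    smooth pursuit. Thresholds here are illustrative, not prescriptive."""
    if speed_deg_per_s > saccade_thresh:
        return "saccade"
    if speed_deg_per_s < fixation_thresh:
        return "fixation"
    return "pursuit"
```

For example, a gaze vector pointing straight down the z-axis toward an object at (0, 0, 5) yields an angular distance of 0 degrees, while an object at (5, 0, 0) yields 90 degrees.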
Wilderness Search and Rescue (WiSAR) can benefit from aerial imagery of the search area. Mini unmanned aerial vehicles (mUAVs) can potentially provide such imagery, provided that the autonomy, search algorithms, and operator control unit are designed to support coordinated human-robot search teams. Using results from formal analyses of the WiSAR problem domain, we summarize and discuss information flow requirements for WiSAR with an eye toward the efficient use of mUAVs to support search. We then identify and discuss three different operational paradigms for performing field searches, and identify influences that affect which human-robot team paradigm is best. Since the likely location of a missing person is key in determining the best paradigm given the circumstances, we report on preliminary efforts to model the behavior of missing persons in a given situation. Throughout the paper, we use information obtained from subject matter experts from Utah County Search and Rescue, and report experiences and "lessons learned" from a series of trials using human-robot teams to perform mock searches.
Wilderness search and rescue (WiSAR) requires thousands of hours of search over large and complex terrains. Mini-UAVs (unmanned aerial vehicles) may dramatically improve WiSAR search efficiency. Early field trials in UAV-enabled WiSAR indicated a need to improve the human-UAV interaction, the coordination between the UAV and ground search resources, and the UAV technology. A cognitive task analysis was conducted to inform the design of the UAV technology, the associated interface, and the roles and responsibilities associated with effectively integrating the technology into the existing WiSAR response. Two cognitive task analysis techniques were employed: goal-directed task analysis and a partial cognitive work analysis that included a work domain analysis and a control task analysis. Early field trials and WiSAR search personnel informed the task analyses, which consequently informed the UAV technology design and integration. This paper (a) reviews how and why the task analyses were conducted, how the systems engineering process incorporated field trials to inform the task analyses and to guide the technology development; and (b) provides examples of how the analyses informed the resulting technology development with an eye toward providing insight into how such analysis techniques can be applied to developing UAV systems.