Each of four groups of 12 subjects performed four psychophysical tasks. The age ranges of the four groups were 19-31, 45-57, 58-70, and 71-83 years, respectively. All four tasks required some form of visual information processing: Two were backward-masking tasks; two were temporal-integration tasks. In all four tasks, increasing temporal functions over age were obtained, suggesting slower processing rates as age increased. The results support an active processing model of visual perception that interprets the duration of visible persistence and the duration of the interval in which backward masking is effective as indices of the time course of early stages in the processing of stimulus features. The evidence also suggests that backward masking and visible persistence may be mediated by distinct mechanisms that are affected differently by aging processes. A model that conceptualizes the visual system as a multichannel processor is proposed as an explanation for some of the findings.
The operational (airborne) Enhanced/Synthetic Vision System will employ a helmet-mounted display with a background synthetic image surrounding a fused inset sensor image. In the present study, three subjects viewed an emulation of a descending flight to a crash site displayed on an SVGA monitor. Independent variables were: 3 fusion algorithms; 3 visibility conditions; 2 sensor conditions; and 9 sensor/synthetic image misregistration conditions. The task was to detect specified terrain features, objects, and image anomalies as they became visible in 16 successive fused image snapshots along the flight path. The fusion of synthetic images with corresponding sensor images supported consistent subject performance with the simpler algorithms (averaging and differencing). Performance with the more complex opponent-process algorithm was less consistent, and more image anomalies were generated. Reductions in synthetic scene resolution did not degrade performance, but elevation source data errors interfered with scene interpretation. These results are discussed within the context of operational requirements.
ABSTRACT There is interest in the development of synthetic vision systems to improve the capability of aircraft to take off and land in poor visibility. These systems often have inherent processing delays that can affect a pilot's ability to control an aircraft and a pilot's sense of orientation. The goal of the current study was to determine how much time delay a pilot could tolerate before control was affected, and whether physiological effects would be apparent at the same point. Pilots hovered at a predetermined position in a full flight simulator equipped with a Computer Image Generation (CIG) system and a helmet-mounted display. The pilot's visual image was delayed by 67 to 334 milliseconds, and varying levels of turbulence were applied to increase the task difficulty. Pilot performance was assessed by collecting objective data on aircraft position error. Handling qualities ratings and reports of physiological symptoms were collected by questionnaire. The results showed that visual time delay increased the variability of position error when as little as 134 ms of delay was encountered. At long delays, sickness symptoms were reported in addition to handling qualities decrements. Turbulence had a minimal effect on performance at long time delays; however, it resulted in increased station-keeping errors and degraded handling qualities at low delays.
Algorithms for image fusion were evaluated as part of the development of an airborne Enhanced/Synthetic Vision System (ESVS) for helicopter Search and Rescue operations. The ESVS will be displayed on a high-resolution, wide field-of-view helmet-mounted display (HMD). The HMD full field-of-view (FOV) will consist of a synthetic image to support navigation and situational awareness, and an infrared image inset will be fused into the center of the FOV to provide real-world feedback and support flight operations at low altitudes. Three fusion algorithms were selected for evaluation against the ESVS requirements. In particular, algorithms were modified and tested against the unique problem of presenting a useful fusion of information from high-quality synthetic images with questionable real-world correlation and highly correlated sensor images of varying quality. A pixel averaging algorithm was selected as the simplest way to fuse two different sources of imagery. Two other algorithms, originally developed for real-time fusion of low-light visible images with infrared images (one at the TNO Human Factors Institute and the other at the MIT Lincoln Laboratory), were adapted and implemented. To evaluate the algorithms' performance, artificially generated infrared images were fused with synthetic images and viewed in a sequence corresponding to a search and rescue scenario for a descent to hover. Application of all three fusion algorithms improved the raw infrared image, but the MIT-based algorithm generated some undesirable effects such as contrast reversals. This algorithm was also computationally intensive and relatively difficult to tune. The pixel averaging algorithm was the simplest in terms of per-pixel operations and provided good results.
The TNO-based algorithm was superior in that, while it was slightly more complex than pixel averaging, it demonstrated similar results, was more flexible, and had the advantage of predictably preserving certain synthetic features which could be used to support obstacle detection.
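The two simpler fusion approaches mentioned in these abstracts (averaging and differencing) are straightforward per-pixel operations. The following is a minimal illustrative sketch in Python/NumPy, not the authors' implementation; the function names, the toy 2x2 images, and the re-centering used for the difference image are assumptions made for the example, and it assumes co-registered, same-size grayscale images scaled to [0, 1].

```python
import numpy as np

def fuse_average(synthetic, sensor):
    """Pixel-averaging fusion: each output pixel is the mean of the
    corresponding synthetic and sensor pixels."""
    return (synthetic + sensor) / 2.0

def fuse_difference(synthetic, sensor):
    """Differencing fusion (illustrative form): emphasizes where the
    sensor image departs from the synthetic prediction, re-centered
    and clipped so the output stays in [0, 1]."""
    return np.clip((sensor - synthetic + 1.0) / 2.0, 0.0, 1.0)

# Hypothetical co-registered 2x2 grayscale images in [0, 1]
synthetic = np.array([[0.2, 0.8],
                      [0.5, 0.5]])
sensor = np.array([[0.4, 0.8],
                   [0.1, 0.9]])

avg = fuse_average(synthetic, sensor)    # [[0.3, 0.8], [0.3, 0.7]]
diff = fuse_difference(synthetic, sensor)
```

Per-pixel averaging like this is cheap enough for real-time HMD frame rates, which is consistent with the abstracts' observation that the simplest algorithm was the least computationally demanding.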