The operational (airborne) Enhanced/Synthetic Vision System will employ a helmet-mounted display with a background synthetic image encompassing a fused inset sensor image. In the present study, three subjects viewed an emulation of a descending flight to a crash site displayed on an SVGA monitor. Independent variables were: 3 fusion algorithms; 3 visibility conditions; 2 sensor conditions; and 9 sensor/synthetic image misregistration conditions. The task was to detect specified terrain features, objects and image anomalies as they became visible in 16 successive fused image snapshots along the flight path. The fusion of synthetic images with corresponding sensor images supported consistent subject performance with the simpler algorithms (averaging and differencing). Performance with the more complex opponent process algorithm was less consistent and more image anomalies were generated. Reductions in synthetic scene resolution did not degrade performance, but elevation source data errors interfered with scene interpretation. These results will be discussed within the context of operational requirements.
Algorithms for image fusion were evaluated as part of the development of an airborne Enhanced/Synthetic Vision System (ESVS) for helicopter Search and Rescue operations. The ESVS will be displayed on a high-resolution, wide field-of-view helmet-mounted display (HMD). The HMD full field-of-view (FOV) will consist of a synthetic image to support navigation and situational awareness, and an infrared image inset will be fused into the center of the FOV to provide real-world feedback and support flight operations at low altitudes. Three fusion algorithms were selected for evaluation against the ESVS requirements. In particular, algorithms were modified and tested against the unique problem of presenting a useful fusion of information from high-quality synthetic images with questionable real-world correlation and highly correlated sensor images of varying quality. A pixel-averaging algorithm was selected as the simplest way to fuse two different sources of imagery. Two other algorithms, originally developed for real-time fusion of low-light visible images with infrared images (one at the TNO Human Factors Institute and the other at the MIT Lincoln Laboratory), were adapted and implemented. To evaluate the algorithms' performance, artificially generated infrared images were fused with synthetic images and viewed in a sequence corresponding to a search and rescue scenario for a descent to hover. Application of all three fusion algorithms improved the raw infrared image, but the MIT-based algorithm generated some undesirable effects such as contrast reversals. This algorithm was also computationally intensive and relatively difficult to tune. The pixel-averaging algorithm was simplest in terms of per-pixel operations and provided good results.
The TNO-based algorithm was superior in that, while it was slightly more complex than pixel averaging, it demonstrated similar results, was more flexible, and had the advantage of predictably preserving certain synthetic features that could be used to support obstacle detection.
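As a concrete illustration, pixel averaging reduces to a single weighted sum per pixel. The function below is a minimal sketch; the name `fuse_average`, the `weight` parameter, and the [0, 1] intensity convention are assumptions for illustration, not details from the paper:

```python
import numpy as np

def fuse_average(synthetic, sensor, weight=0.5):
    """Fuse two pre-registered images by weighted pixel averaging.

    Both inputs are float arrays of identical shape with intensities
    in [0, 1]; `weight` sets the sensor contribution (0.5 gives the
    plain average used as the simplest baseline fusion).
    """
    return (1.0 - weight) * synthetic + weight * sensor

# A bright synthetic patch averaged with a darker sensor patch
# yields the midpoint intensity everywhere.
fused = fuse_average(np.full((4, 4), 0.8), np.full((4, 4), 0.2))
```

Because each output pixel depends on exactly one pixel from each source, the cost is a single multiply-add per pixel, consistent with the observation above that averaging was the simplest algorithm in per-pixel operations.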
This paper presents a novel image-based approach for updating the geometry of 3D models. The technique can cope with large-scale models, using a single imaging sensor to which an arbitrary motion is applied. Current approaches usually do not fully take advantage of strong prior information, often available in the form of an initial model. The approach is thus novel in that geometric anomalies are quickly detected, significantly reducing problem complexity. Hence, given a geometric model and known camera motion, the image warping can be calculated and intensity patterns can be predicted. If predictions do not match observations, the model is assumed to be incorrect. The updating is then cast as an optimization problem where differences between observations and predictions are minimized. The algorithm is tested against both synthetic and real imaging data to update a terrain model. Results show that the algorithm can automatically detect and correct geometrical problems of different types and sizes.
Synthetic vision systems render artificial images of the world based on a database and position/attitude information of the aircraft. Due to both its static nature and inherent modelling errors, the database introduces anomalies in the synthetic imagery. Since it reflects at best a nominal state of the environment, it often requires updating via online measurements. The latter can vary from correction of pose and geometry to more complex operations such as marking the locations of detected obstacles. This paper presents an approach for detecting database geometric anomalies online. Since range sensors have a low update rate, they cannot be used for quick validation. Instead of range data, the proposed technique employs an imaging sensor, which can be of any type. It takes advantage of the fact that given a geometric model of the scene and known motion of the observer, the sensor image warping can be exactly predicted. If the geometry of the database is incorrect, the sensor image will not be correctly predicted and geometric differences will thus be detected. The algorithm is tested against simulated imagery and results show that it can correctly identify geometric anomalies. It can cope with known misalignment of the database and pose estimation errors. The technique is shown to be quite robust in low visibility conditions, given that the validated features are at least partially visible. It also automatically detects if a motion is sufficient for a given sensitivity.
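The predict-and-compare loop described above can be sketched in a few lines. The toy below assumes a pinhole camera translating laterally, so each image column shifts by a disparity of focal_baseline / depth; the column-wise warp, the function names, and the residual threshold are all illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def warp_columns(img, shifts):
    """Column-wise warp: out[:, x] = img[:, x - shifts[x]] (0 outside).

    A crude stand-in for full per-pixel image warping from a
    geometric model and known camera motion.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for x in range(w):
        src = x - int(shifts[x])
        if 0 <= src < w:
            out[:, x] = img[:, src]
    return out

def detect_anomalies(img_t0, img_t1, depth_db, focal_baseline, thresh=0.02):
    """Flag columns where the database depth mispredicts observed motion.

    Predicts the new image by warping the old one with database-derived
    disparities, then thresholds the per-column prediction residual.
    """
    shifts = np.round(focal_baseline / depth_db)
    predicted = warp_columns(img_t0, shifts)
    residual = np.abs(predicted - img_t1).mean(axis=0)
    return residual > thresh

# Demo: the database says depth 10 everywhere, but columns 10-12 are
# really at depth 5, so their observed shift is 2 px instead of 1 px.
w = 20
img_t0 = np.tile(np.linspace(0.0, 1.0, w), (8, 1))
true_depth = np.full(w, 10.0)
true_depth[10:13] = 5.0
img_t1 = warp_columns(img_t0, np.round(10.0 / true_depth))
flags = detect_anomalies(img_t0, img_t1, np.full(w, 10.0), focal_baseline=10.0)
```

Columns whose database depth is wrong are mispredicted by the warp, so the residual exceeds the threshold exactly there; everywhere the geometry is correct, prediction and observation agree and the residual is zero.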
In aviation, synthetic vision systems produce artificial views of the world to support navigation and situational awareness in poor visibility conditions. Synthetic images of local terrain are rendered from a database and registered through the aircraft navigation system. Because the database reflects, at best, a nominal state of the environment, it needs to be verified to ensure its consistency with reality. This paper presents a technique for real-time verification of databases using a single imaging device, of any type. It is differential and, as such, requires motion of the sensor. The geometric information of the database is used to predict how the sensor image should change. If the measured change differs from the predicted change, the database geometry is assumed to be incorrect. Geometric anomalies are localized and their severity is estimated in absolute terms using a minimization process. The technique is tested against real flight data acquired by a helicopter to verify a database consisting of a digital elevation map. Results show that geometric anomalies can be detected and that their location and importance can be evaluated.
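The final step, estimating anomaly severity in absolute terms via minimization, can be illustrated with a coarse grid search over candidate depth corrections. This is a deliberately simple stand-in for the paper's optimizer; the function name, the disparity model focal_baseline / depth, and all numbers are assumptions for illustration:

```python
import numpy as np

def estimate_severity(disp_obs, depth_db, focal_baseline, offsets):
    """Find the depth correction that best reconciles the database
    with the observed image motion (disparity), by minimizing the
    squared disparity residual over a grid of candidate offsets.
    """
    errors = [np.mean((focal_baseline / (depth_db + off) - disp_obs) ** 2)
              for off in offsets]
    best = int(np.argmin(errors))
    return offsets[best], errors[best]

# Demo: the database puts the terrain at 100 m, but the observed
# disparity of 10 px implies 80 m, i.e. a -20 m range error.
offset, residual = estimate_severity(
    disp_obs=10.0,
    depth_db=np.array([100.0]),
    focal_baseline=800.0,
    offsets=np.arange(-30.0, 31.0),
)
```

In a full system the residual would be computed over warped image intensities rather than a single disparity, and a gradient-based optimizer would replace the grid search, but the structure, minimizing prediction error over a geometric correction, is the same.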