Planetary landers have, in the past, relied on physical means to protect the payload from the shock of impact on the surface [1]. These landers, starting their descent from orbit with their initial position known only to a few kilometres, were not required to land at a particular spot, but only to land safely. Today, much more knowledge, obtained from earlier landings and high-resolution orbiting instruments, is available about the surfaces of some planets than when previous landers were designed. Missions are becoming more demanding in terms of landing accuracy, and significant effort is now focused on the design of surface-relative navigation systems. Surface-relative navigation requires a sensor that can pick out features or landmarks on the surface and use these to track the position of the spacecraft relative to the surface; passive and active vision-based navigation sensors are currently being developed. Testing these sophisticated sensors, in particular the image-processing parts, required the development of a realistic, large-scale test bed representative of the real planet's surface. Physical modelling was not able to meet the needs of the sensor testing, so a virtual reality tool has been developed. PANGU (Planet and Asteroid Natural Scene Generation Utility) is a software tool for simulating and visualising the surface of various planetary bodies. It has been designed to support the development of planetary landers that use computer vision to navigate towards the surface and to avoid obstacles near the landing site. PANGU can be used to generate an artificial surface representative of cratered planets and to provide images of the simulated planet. When given the position and orientation of a spacecraft above the planet's surface, PANGU responds by producing an image of the surface from that viewpoint. Current research is extending the capabilities of PANGU so that Martian surfaces and asteroids can also be simulated.
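The pose-in, image-out query loop described above can be sketched as follows. The `SurfaceRenderer` class, the `Pose` fields, and the `render` method are illustrative assumptions for the concept only, not the real PANGU interface:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Spacecraft position above the surface (metres) and attitude (degrees).
    x: float; y: float; z: float
    yaw: float; pitch: float; roll: float

class SurfaceRenderer:
    """Hypothetical stand-in for a PANGU-style planet-surface image server."""
    def __init__(self, width=512, height=512):
        self.width, self.height = width, height

    def render(self, pose: Pose):
        # A real renderer would draw the cratered terrain model from this
        # viewpoint; here we just return a blank image buffer of the right size.
        return [[0] * self.width for _ in range(self.height)]

# Simulated descent: request one navigation image per guidance step.
renderer = SurfaceRenderer()
for altitude in (2000.0, 1000.0, 500.0):
    img = renderer.render(Pose(0.0, 0.0, altitude, 0.0, -90.0, 0.0))
    print(altitude, len(img), len(img[0]))
```

In a sensor test bed, the vision system under test would consume each returned image while the simulator retains the exact ground-truth pose for later error analysis.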
This paper describes the PANGU simulation tool in detail and provides example images of the simulated surface as seen from a descending planetary lander.
We present an autonomous visual landmark recognition and pose estimation algorithm designed for use in navigation of spacecraft around small asteroids. Landmarks are selected as generic points on the asteroid surface that produce strong Harris corners in an image under a wide range of viewing and illumination conditions; no particular type of morphological feature is required. The set of landmarks is triangulated to obtain a tightly fitting mesh representing an optimal low-resolution model of the natural asteroid shape, which is used onboard to determine the visibility of each landmark and enables the algorithm to work with highly concave bodies. The shape model is also used to estimate the centre of brightness of the asteroid and eliminate large translation errors prior to the main landmark recognition stage. The algorithm works by refining an initial estimate of the spacecraft position and orientation. Tests with real and synthetic images show good performance under realistic noise conditions. Using simulated images, the median landmark recognition error is 2 m, and the error on the spacecraft position in the asteroid body frame is reduced from 45 m to 21 m at a range of 2 km from the surface. With real images the translation error at 8 km from the surface increases from 107 m to 119 m, due mainly to the larger range and lack of sensitivity to translations along the camera boresight. The median number of landmarks detected in the simulated and real images is 59 and 44 respectively. This algorithm was partly developed and tested during industrial studies for the European Space Agency's Marco Polo-R asteroid sample return mission.
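The Harris-corner selection at the heart of the landmark detection can be illustrated with a minimal, self-contained sketch (pure NumPy, simple central-difference gradients and a box window). This shows only the corner response; the paper's pipeline adds the viewing/illumination stability screening and the mesh-based visibility checks:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the structure tensor of the image gradients."""
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal gradient
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical gradient

    def box3(a):
        # 3x3 box-filter sum via shifted, zero-padded copies.
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

# Bright square on a dark background: its four corners give strong
# positive responses, while straight edges give R <= 0.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
R = harris_response(img)
peaks = np.argwhere(R > 0.1 * R.max())
print(len(peaks))
```

On real asteroid imagery the candidate peaks would then be tracked across renderings under varying sun angles, and only points that respond stably would be promoted to landmarks.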
Spacecraft exploration of asteroids presents a variety of autonomous navigation challenges that can be aided by virtual models to test and develop guidance and hazard avoidance systems. This paper describes the extension and application of graphics techniques to create high-resolution, virtual asteroid models to simulate cameras and other spacecraft sensors approaching and descending towards asteroids. A scalable model structure with evenly spaced vertices is specified to simplify terrain modeling, avoid distortion at the poles and enable triangle strip definition for efficient rendering. The base asteroid models are created using both a two-phase Poisson faulting technique and Perlin noise. Realistic asteroid surfaces are created by adding synthetic crater models adapted from lunar terrain simulation and multi-resolution boulders to the base models. The synthetic asteroids are evaluated by comparison with real asteroid images, slope distributions, and by applying a surface relative feature tracking algorithm to the models.
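The faulting idea behind the base models can be sketched in a few lines: repeatedly pick a random line through the heightmap, raise one side and lower the other, and let many faults accumulate into fractal-looking relief. This is an illustrative single-phase version; the paper's two-phase Poisson faulting, Perlin noise layer, and crater/boulder overlays are not reproduced here:

```python
import random
import numpy as np

def fault_terrain(size=64, n_faults=200, step=1.0, seed=42):
    """Build a heightmap by accumulating random fault lines."""
    rng = random.Random(seed)
    h = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for _ in range(n_faults):
        # Random fault line: a point inside the grid plus a direction.
        px, py = rng.uniform(0, size), rng.uniform(0, size)
        theta = rng.uniform(0, 2 * np.pi)
        # Sign of the signed distance decides which side moves up or down.
        side = (xs - px) * np.cos(theta) + (ys - py) * np.sin(theta) > 0
        h += np.where(side, step, -step)
    return h

h = fault_terrain()
print(h.shape, float(h.min()), float(h.max()))
```

For a closed asteroid model the same displacement would be applied to mesh vertices on either side of a random plane through the body, which is why the evenly spaced vertex structure described above matters.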
The use of machine vision to guide robotic spacecraft is being considered for a wide range of missions, such as planetary approach and landing, asteroid and small body sampling operations and in-orbit rendezvous and docking. Numerical simulation plays an essential role in the development and testing of such systems, which in the context of vision guidance means that realistic sequences of navigation images are required, together with knowledge of the ground-truth camera motion. Computer generated imagery (CGI) offers a variety of benefits over real images, such as availability, cost, flexibility and knowledge of the ground-truth camera motion to high precision. However, standard CGI methods developed for terrestrial applications lack the realism, fidelity and performance required for engineering simulations. In this paper, we present the results of our ongoing work to develop a suitable CGI-based test environment for spacecraft vision guidance systems. We focus on the various issues involved with image simulation, including the selection of standard CGI techniques and the adaptations required for use in space applications. We also describe our approach to integration with high-fidelity end-to-end mission simulators, and summarise a variety of European Space Agency research and development projects that used our test environment.
The success of examiners in classifying test items in terms of the levels of a taxonomy of educational skills and curriculum core categories in a national item pool is examined by comparing examiners' classifications with item difficulty. The relation between taxonomy level and difficulty was found to be stronger than that for core categories, but in neither case was it very marked. The implications of this finding are discussed.

(1) COMPUTERS AND ITEM BANKS

Computers were so named because the only significant work given to the early models was computation. However, computers can be used for many other purposes, in particular for information retrieval and processing, being used in the same way as a person consults a library, a filing system, a directory or a dictionary. One such use in the area of test construction is the storage and retrieval of actual test questions and associated information, known as 'item banking'. Currently, computer-based systems are being developed which completely replace the routine work involved in examining, and only require examiners to decide what skills and content are to be tested and to appraise the final results. The system selects items to fulfil the examiner's requirements, arranges them in a desired order, and prints the test. Once students have taken the test, the system reads and scores the answer sheets, and a computed analysis is provided for the examiner by which he can evaluate the test as a whole and in terms of individual items. Diagnostic teaching information is also supplied, and the bank items and statistics are updated. Item banks used by several institutions offer a number of advantages. By using a bank's facilities, an individual examiner generally has to contribute only a small fraction of the items he would normally have to write if he were constructing a test on his own.
Other items are contributed by examiners in other institutions, and items contributed in previous years are re-used. Under these conditions, an examiner can contribute his best items, and the fact that the items will be on 'professional view' and
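The taxonomy-versus-difficulty comparison reported in the study above amounts to correlating two attributes stored per item in the bank. A minimal sketch, with invented data for illustration only:

```python
import numpy as np

# Hypothetical bank records: each item carries an examiner-assigned
# taxonomy level (1 = recall ... 4 = higher-order skill) and an observed
# facility value (proportion of students answering correctly).
taxonomy_level = np.array([1, 1, 2, 2, 3, 3, 4, 4])
facility = np.array([0.9, 0.8, 0.7, 0.75, 0.5, 0.6, 0.4, 0.45])

# Pearson correlation: if higher taxonomy levels mean harder items,
# facility should fall as level rises, giving a negative coefficient.
r = np.corrcoef(taxonomy_level, facility)[0, 1]
print(round(r, 2))  # -0.97
```

In this fabricated example the relation is strong; the study's point is that with real examiner classifications the observed relation, while present for taxonomy levels, was not very marked.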