Planetary landers have, in the past, relied on physical means to protect the payload from the shock of impact on the surface [1]. These landers, starting their descent from orbit with their initial position known only to a few kilometres, were not required to land at a particular spot, but only to land safely. Today, much more knowledge about the surfaces of some planets is available, obtained from earlier landings and high-resolution orbiting instruments, than when previous landers were designed. Missions are becoming more demanding in terms of landing accuracy, and significant effort is now focused on the design of surface-relative navigation systems. Surface-relative navigation requires a sensor that can pick out features or landmarks on the surface and use them to track the position of the spacecraft relative to the surface; passive and active vision-based navigation sensors are currently being developed. Testing these sophisticated sensors, in particular their image-processing components, required the development of a realistic, large-scale test bed representative of the real planet's surface. Physical modelling was unable to meet the needs of sensor testing, so a virtual-reality tool has been developed.

PANGU (Planet and Asteroid Natural Scene Generation Utility) is a software tool for simulating and visualising the surface of various planetary bodies. It has been designed to support the development of planetary landers that use computer vision to navigate towards the surface and to avoid obstacles near the landing site. PANGU can be used to generate an artificial surface representative of cratered planets and to provide images of the simulated planet. When given the position and orientation of a spacecraft above the planet's surface, PANGU responds by producing an image of the surface from that viewpoint. Current research is extending the capabilities of PANGU so that Martian surfaces and asteroids can also be simulated.
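The pose-in, image-out interaction described above can be sketched as a simple query-response loop. PANGU's actual interface is not specified here, so the `CameraPose` type and `request_image` function below are hypothetical stand-ins used only to illustrate the pattern: the navigation simulation supplies a position and orientation for each frame, and the renderer returns the corresponding view of the surface.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """Camera state sent to the renderer (planet-fixed frame).
    Both fields are illustrative; real interfaces define their own frames."""
    position: tuple     # (x, y, z) in metres
    quaternion: tuple   # (s, x, y, z) camera attitude

def request_image(pose: CameraPose, width: int = 512, height: int = 512):
    """Hypothetical stand-in for the renderer: given a pose, return an
    image (here just a uniform grey frame of the requested size)."""
    return [[128] * width for _ in range(height)]

# A descending lander would issue one request per navigation frame:
frames = []
for altitude in (5000.0, 2500.0, 1000.0):
    pose = CameraPose(position=(0.0, 0.0, altitude),
                      quaternion=(1.0, 0.0, 0.0, 0.0))  # nadir-pointing
    frames.append(request_image(pose))
```

In a real test bed the vision sensor under test would consume each returned frame, while the pose that produced it serves as the reference against which the sensor's output is scored.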
This paper describes the PANGU simulation tool in detail and provides example images of the simulated surface as seen from a descending planetary lander.
The use of machine vision to guide robotic spacecraft is being considered for a wide range of missions, such as planetary approach and landing, asteroid and small-body sampling operations, and in-orbit rendezvous and docking. Numerical simulation plays an essential role in the development and testing of such systems, which in the context of vision guidance means that realistic sequences of navigation images are required, together with knowledge of the ground-truth camera motion. Computer-generated imagery (CGI) offers a variety of benefits over real images, such as availability, cost, flexibility and knowledge of the ground-truth camera motion to high precision. However, standard CGI methods developed for terrestrial applications lack the realism, fidelity and performance required for engineering simulations.

In this paper, we present the results of our ongoing work to develop a suitable CGI-based test environment for spacecraft vision-guidance systems. We focus on the various issues involved in image simulation, including the selection of standard CGI techniques and the adaptations required for use in space applications. We also describe our approach to integration with high-fidelity end-to-end mission simulators, and summarise a variety of European Space Agency research and development projects that used our test environment.
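One advantage noted above is exact knowledge of the ground-truth camera motion: because the trajectory is generated numerically, each rendered frame can be tagged with the precise pose that produced it. The sketch below is a minimal illustration of that idea, assuming a straight vertical descent at constant speed; the function name and pose representation are hypothetical, not taken from any particular simulator.

```python
def descent_trajectory(h0, v, dt, n):
    """Generate ground-truth camera poses for a vertical descent.

    h0 -- initial altitude in metres
    v  -- descent speed in m/s
    dt -- time step in seconds
    n  -- number of frames

    Each pose records the time, camera position in a planet-fixed
    frame, and a nadir-pointing attitude quaternion (s, x, y, z).
    """
    poses = []
    for k in range(n):
        t = k * dt
        poses.append({
            "t": t,
            "position": (0.0, 0.0, max(h0 - v * t, 0.0)),
            "quaternion": (1.0, 0.0, 0.0, 0.0),  # nadir-pointing
        })
    return poses
```

A vision-guidance algorithm run on the corresponding image sequence can then be evaluated by comparing its pose estimates frame by frame against this exact reference, something that is difficult to obtain with real flight imagery.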