Under the umbrella of the European Space Agency (ESA) StarTiger program, a rapid prototyping study called Seeker was initiated. Partners from space and nonspace sectors were brought together to develop a prototype Mars rover system capable of autonomously exploring several kilometers of highly representative Mars terrain over a three‐day period. This paper reports on our approach and the final field trials, which took place in the Atacama Desert, Chile. Long‐range navigation and the associated remote rover field trials represent a new departure for ESA. The primary focus was to determine whether current computer vision and artificial intelligence based software could enable such a capability on Mars, given the current limit of around 200 m per Martian day. The paper does not seek to introduce new theoretical techniques or compare various approaches, but it offers a unique perspective on their behavior in a highly representative environment. The final system autonomously navigated 5.05 km of such terrain in a single day. This work is part of a wider effort to achieve a step change in autonomous capability for future Mars/lunar exploration rover platforms.
This paper presents the Mojave Desert field test results of planetary rover visual motion estimation (VME) developed under the “Autonomous, Intelligent, and Robust Guidance, Navigation, and Control for Planetary Rovers (AIR‐GNC)” project. Three VME schemes are compared in realistic conditions. The main innovations of this project include the use of different features from stereo‐pair images as visual landmarks and the use of vision‐based feedback to close the path‐tracking loop. The multiweek field campaign, conducted on relevant Mars‐analogue terrains under dramatically changing lighting and weather conditions, shows good localization accuracy on average. Moreover, the MDA‐developed inertial measurement unit (IMU)‐corrected odometry was reliable and accurate at all test locations, including loose sand dunes. These results are based on data collected during 7.3 km of traverse, including both fully autonomous and joystick‐driven runs. © 2012 Wiley Periodicals, Inc.
The Rendezvous Lidar System (RLS), a high-performance scanning time-of-flight lidar jointly developed by MDA and Optech, was employed successfully during the XSS-11 spacecraft's 23-month mission. Ongoing development of the RLS mission software has resulted in an integrated pose functionality suited to safety-critical applications, specifically the terminal rendezvous of a visiting vehicle with the International Space Station (ISS). This integrated pose capability extends the contribution of the lidar from long-range acquisition and tracking for terminal rendezvous through to final alignment for docking or berthing. Innovative aspects of the technology that were developed include: 1) efficacious algorithms to detect, recognize, and compute the pose of a client spacecraft from a single scan using an intelligent search of candidate solutions, 2) automatic scene evaluation and feature selection algorithms and software that assist mission planners in specifying accurate and robust scan scheduling, and 3) optimal pose tracking functionality using knowledge of the relative spacecraft states. The development process incorporated the concept of sensor system bandwidth to address the sometimes unclear or misleading specifications of update rate and measurement delay often cited for rendezvous sensors. Because relative navigation sensors provide the measured feedback to the spacecraft GN&C, we propose a new method of specifying the performance of these sensors to better enable a full assessment of a given sensor in the closed-loop control for any given vehicle. This approach, and the tools and methods enabling it, permitted a rapid and rigorous development and verification of the pose tracking functionality. The complete system was then integrated and demonstrated in the MDA space vision facility using the flight-representative engineering model RLS lidar sensor.
This paper addresses the modeling, simulation, and control of a robotic servicing system for the Hubble Space Telescope servicing missions. The simulation models of the robotic system include flexible‐body dynamics, control systems, and geometric models of the contacting bodies. These models are incorporated into MDA's multibody dynamics simulator, the Space Station Portable Operations Training Simulator (SPOTS). Three simulation examples of the robotic servicing operations are presented: (1) capture of the Hubble Space Telescope, (2) berthing the Hubble Space Telescope to the Hubble Robotic Vehicle, and (3) inserting the Wide Field Camera into the Hubble Space Telescope.