Urban search and rescue missions place special requirements on robotic systems. Small aerial systems provide essential support to human task forces in situation assessment and surveillance. As external infrastructure for navigation and communication is usually not available, robotic systems must be able to operate autonomously. The limited payload of small aerial systems poses a great challenge to system design: the optimal tradeoff between flight performance, sensors, and computing resources has to be found. Communication with external computers cannot be guaranteed; therefore, all processing required for navigation must be performed onboard.
Micro air vehicles have become very popular in recent years. Autonomous navigation of such systems plays an important role in many industrial applications as well as in search-and-rescue scenarios. We present a quadrotor that performs autonomous navigation in complex indoor and outdoor environments. An operator selects target positions in the onboard map, and the system autonomously plans an obstacle-free path and flies to these locations. An onboard stereo camera and an inertial measurement unit are the only sensors. The system is independent of external navigation aids such as GPS, and no assumptions are made about the structure of the unknown environment. All navigation tasks are implemented onboard the system; a wireless connection is used only for sending images and a three-dimensional (3D) map to the operator and for receiving target locations. We discuss the hardware and software setup of the system in detail. Highlights of the implementation are field-programmable-gate-array-based dense stereo matching of 0.5 Mpixel images at a rate of 14.6 Hz using semiglobal matching, locally drift-free visual odometry with key frames, and sensor data fusion with compensation of measurement delays of 220 ms. We show the robustness of the approach in simulations and in experiments with ground truth. We present the results of a complex autonomous indoor/outdoor flight and the exploration of a coal mine with obstacle avoidance and 3D mapping.
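The FPGA stereo pipeline described above is not publicly available, but the underlying algorithm, semiglobal matching (SGM), has a well-known CPU implementation in OpenCV. The following is a minimal sketch using cv2.StereoSGBM on a rectified stereo pair; the file names and parameter values are placeholder assumptions, not the authors' settings.

```python
import cv2

# Load a rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# StereoSGBM implements Hirschmueller's semiglobal matching (SGM),
# the same class of algorithm the abstract runs on an FPGA.
block_size = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # disparity search range, must be divisible by 16
    blockSize=block_size,
    P1=8 * block_size ** 2,      # smoothness penalty for small disparity changes
    P2=32 * block_size ** 2,     # penalty for larger disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0
```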
Flying in unknown environments can lead to unwanted collisions. If not accounted for, these may cause serious damage to the robot and/or its surroundings. Fast and robust collision detection combined with a safe reaction is therefore essential in this context. Deliberate physical interaction may also be required in some applications; the robot can then switch into an interaction mode when contact occurs, and the control loop must be designed with interaction in mind. To implement these mechanisms, knowledge of the interaction forces with the environment is required. In principle, they may be measured or estimated. In this paper, we present a novel model-based method for external wrench estimation in flying robots, based only on proprioceptive sensors and the robot's dynamics model. Using this estimate, we design admittance and impedance controllers for sensitive and robust physical interaction. We also investigate the performance of our collision detection and reaction schemes in order to guarantee collision safety. Upon collision, we determine the collision location and contact normal on the robot's geometric model, relying on the complete wrench information provided by our scheme. This allows applications such as tactile environment mapping.
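The paper's estimator equations are not reproduced in the abstract; a common model-based approach of the kind it describes is a momentum-based observer that uses only the dynamics model and proprioceptive data. The sketch below illustrates the idea for the translational (force) part of the wrench; the class, gains, and interfaces are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class ExternalForceObserver:
    """Momentum-based observer for the external force acting on a multirotor.

    Uses only the rigid-body dynamics model and proprioceptive data:
    translational momentum p = m*v evolves as dp/dt = R*f_u + m*g + f_ext,
    so the residual between measured and integrated predicted momentum,
    fed back through a gain, converges to an estimate of f_ext.
    """

    def __init__(self, mass, gain, gravity=np.array([0.0, 0.0, -9.81])):
        self.m = mass          # vehicle mass [kg] (assumed known)
        self.K = gain          # observer gain (assumed value, e.g. 5.0)
        self.g = gravity       # gravity in the world frame (z up)
        self.f_ext_hat = np.zeros(3)
        self.p_int = None      # integral of the modeled momentum derivative

    def update(self, v_world, R_wb, f_thrust_body, dt):
        """v_world: measured velocity [m/s]; R_wb: body-to-world rotation;
        f_thrust_body: modeled propeller force in the body frame [N]."""
        p = self.m * v_world
        if self.p_int is None:
            self.p_int = p.copy()   # initialize so the residual starts at zero
        # Integrate the modeled momentum derivative, including the current estimate.
        dp_model = R_wb @ f_thrust_body + self.m * self.g + self.f_ext_hat
        self.p_int += dp_model * dt
        # Feedback of the momentum residual yields the external force estimate.
        self.f_ext_hat = self.K * (p - self.p_int)
        return self.f_ext_hat
```

A collision can then be flagged when the norm of the estimate exceeds a threshold, and the same estimate can serve as the external-force input to an admittance or impedance controller.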
We present a multicopter system equipped with two pairs of wide-angle stereo cameras and an inertial measurement unit (IMU) for robust visual-inertial navigation and time-efficient, omnidirectional 3D mapping, capable of mapping a large area of interest in a short amount of time. The four cameras cover a 240 degree vertical stereo field of view (FOV), which also makes the system suitable for cramped and confined environments such as caves. In our approach, we synthesize eight virtual pinhole cameras from the four wide-angle cameras. Each of the resulting four synthesized pinhole stereo systems provides input to an independent visual odometry (VO). The four individual motion estimates are then fused with data from the IMU, based on their consistency with the state estimation. We describe the configuration and image processing of the vision system as well as the onboard sensor fusion and mapping pipeline of the MAV. We demonstrate the robustness of our multi-VO approach for visual-inertial navigation and present the results of a 3D-mapping experiment.
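The abstract does not detail how the virtual pinhole views are synthesized; a standard way to do this for a calibrated wide-angle (fisheye) lens is to remap the image through a chosen virtual rotation and pinhole intrinsics, for example with OpenCV's fisheye module. All calibration values and intrinsics below are placeholders.

```python
import cv2
import numpy as np

# Calibrated fisheye intrinsics K and distortion D (placeholder values;
# in practice obtained from cv2.fisheye.calibrate).
K = np.array([[380.0, 0.0, 512.0],
              [0.0, 380.0, 384.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.02, -0.01, 0.001, 0.0])   # fisheye distortion coefficients

# Virtual pinhole camera: its own intrinsics P and a rotation R pointing it
# at the desired sector of the wide-angle image. Several rotations per lens
# yield multiple virtual pinhole views, as the abstract describes.
out_size = (512, 384)
P = np.array([[300.0, 0.0, out_size[0] / 2],
              [0.0, 300.0, out_size[1] / 2],
              [0.0, 0.0, 1.0]])
R, _ = cv2.Rodrigues(np.array([0.0, np.deg2rad(30.0), 0.0]))  # tilt the view

map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, R, P, out_size,
                                                 cv2.CV_16SC2)

fisheye_img = cv2.imread("wide_angle.png")                 # placeholder input
pinhole_img = cv2.remap(fisheye_img, map1, map2, cv2.INTER_LINEAR)
```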
We introduce a prototype flying platform for planetary exploration: the autonomous robot design for extraterrestrial applications (ARDEA). Communication with unmanned missions beyond Earth orbit suffers from long time delays; a key criterion for robotic exploration is therefore a robot's ability to perform tasks without human intervention.
For autonomous operation, all computations should be done onboard, and no Global Navigation Satellite System (GNSS) should be relied on for navigation. Given these objectives, ARDEA is equipped with two pairs of wide-angle stereo cameras and an inertial measurement unit (IMU) for robust visual-inertial navigation and time-efficient, omnidirectional 3D mapping. The four cameras cover a 240 degree vertical field of view, enabling the system to operate in confined environments such as caves formed by lava tubes. The captured images are split into several pinhole camera views, which feed simultaneously running visual odometries. The stereo output is used for simultaneous localization and mapping, 3D map generation, and collision-free motion planning. To operate the vehicle efficiently for a variety of missions, ARDEA's capabilities have been modularized into skills that can be assembled to fulfill a mission's objectives. These skills are defined generically, so that they are independent of the robot configuration, making the approach suitable for heterogeneous robotic teams. The diverse skill set also makes the micro aerial vehicle (MAV) useful for any task where autonomous exploration is needed, for example terrestrial search and rescue missions in which visual navigation in GNSS-denied indoor environments, such as partially collapsed buildings or tunnels, is crucial. We have demonstrated the robustness of our system in indoor and outdoor field tests.

Keywords: aerial robotics, computer vision, exploration, GPS-denied operation, planetary robotics
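The skill framework is only outlined in the abstract; a minimal sketch of what robot-independent skills composed into a mission could look like is given below. All class and method names are hypothetical, chosen only to illustrate the idea of generic skills decoupled from a concrete robot.

```python
from abc import ABC, abstractmethod

class Skill(ABC):
    """A robot-agnostic capability; concrete robots supply the implementation."""

    @abstractmethod
    def execute(self, robot) -> bool:
        """Run the skill on the given robot; return True on success."""

class TakeOff(Skill):
    def __init__(self, altitude_m: float):
        self.altitude_m = altitude_m

    def execute(self, robot) -> bool:
        # Delegates to the robot's own controller; any platform that
        # implements take_off() can run this skill.
        return robot.take_off(self.altitude_m)

class ExploreArea(Skill):
    def __init__(self, region):
        self.region = region

    def execute(self, robot) -> bool:
        return robot.explore(self.region)

def run_mission(robot, skills) -> bool:
    """Execute the skills in sequence; abort on the first failure."""
    return all(skill.execute(robot) for skill in skills)
```

Because the skills only assume a common robot interface, the same mission description can be dispatched to different members of a heterogeneous robotic team.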