We are interested in the problem of surveilling and exploring environments that include both indoor and outdoor settings. Aerial vehicles offer mobility and perspective advantages over ground platforms, and micro aerial vehicles (MAVs) are particularly applicable to buildings with multiple floors, where stairwells can be an obstacle to ground vehicles. A challenge when operating in indoor environments is the lack of an external source of localization such as GPS. For these reasons, in this work we focus on autonomous navigation in buildings with multiple floors without requiring an external source of localization or prior knowledge of the environment. To ensure that the robot is fully autonomous, we require all computation to occur on the robot without need for external infrastructure, communication, or human interaction beyond high-level commands. Therefore, we pursue a system design and methodology capable of autonomous navigation with real-time performance on a mobile processor using only onboard sensors (Fig. 1), where in this work autonomous navigation considers multi-floor mapping with loop closure, localization, planning, and control.

We note that the topic of autonomous navigation with a MAV is addressed by others in the community with some similarities in approach and methodology. Relevant to this paper is the work of Bachrach et al. [1, 2], Grzonka et al. [3], and Blosch et al. [4], with results toward online autonomous navigation and exploration with an aerial vehicle. The major points of differentiation between existing results and our work are threefold. First, all the processing is done onboard, requiring algorithms that lend themselves to real-time computation on a small processor. Second, we consider multi-floor operation with loop closure. Third, we design adaptive controllers to compensate for external aerodynamic effects which would otherwise prohibit operation in constrained environments.
II. METHODOLOGY AND RELATED LITERATURE

The discussion follows the logical flow of the system design (Fig. 2). The six degree-of-freedom (DOF) pose of the robot is defined by its 3D position and roll, pitch, and yaw Euler angles, {x, y, z, φ, θ, ψ}.

S. Shen, N. Michael, and V. Kumar are with the GRASP Laboratory,

Fig. 1. The experimental platform with onboard computation (1.6 GHz Atom processor) and sensing (laser, camera, and IMU).
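As an illustration of how a pose {x, y, z, φ, θ, ψ} maps to a rigid-body transform, the sketch below composes a rotation matrix from the three Euler angles. Note this is a generic illustration, not code from the paper, and it assumes the common Z-Y-X (yaw-pitch-roll) convention, which the text does not explicitly state.

```python
import numpy as np

def rotation_from_euler(phi, theta, psi):
    """Rotation matrix from roll (phi), pitch (theta), and yaw (psi),
    assuming the common Z-Y-X (yaw-pitch-roll) convention: R = Rz Ry Rx."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    Rz = np.array([[cpsi, -spsi, 0.0], [spsi, cpsi, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cth, 0.0, sth], [0.0, 1.0, 0.0], [-sth, 0.0, cth]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cphi, -sphi], [0.0, sphi, cphi]])
    return Rz @ Ry @ Rx

def pose_to_transform(x, y, z, phi, theta, psi):
    """Homogeneous 4x4 transform from the 6-DOF pose vector."""
    T = np.eye(4)
    T[:3, :3] = rotation_from_euler(phi, theta, psi)
    T[:3, 3] = [x, y, z]
    return T
```

With this convention, the yaw angle ψ is the rotation about the world z-axis, which is the component estimated by the laser scan matcher described next.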
A. Pose Estimation

A scanning laser range sensor retrofitted with mirrors for beam redirection to the floor and ceiling serves as a primary source of information for position and yaw estimation. We evaluated several laser-based methods for pose estimation, such as exhaustive search [5] and feature-based approaches [6, 7]. However, because of our limited onboard computational resources, we chose the Iterative Closest Point (ICP) algorithm [8], which yields a robust and inexpensive continuous pose estimate. We make use of a grid-based search [9] to speed up the computationally expensive closest-point search in ICP. The result of the ICP algorithm is an estimate of {x, y, ψ}. The algorit...
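To make the two ideas above concrete, the sketch below implements a minimal point-to-point 2D ICP in which the closest-point query is accelerated by hashing the reference scan into a coarse grid and searching only nearby cells. This is an illustrative reconstruction under simplifying assumptions (point-to-point error, closed-form SVD alignment step, fixed iteration count), not the paper's implementation; cell sizes and search radii here are arbitrary.

```python
import numpy as np
from collections import defaultdict

def build_grid(ref_pts, cell):
    """Hash reference points into square cells for fast closest-point lookup."""
    grid = defaultdict(list)
    for p in ref_pts:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def closest_point(q, grid, cell, max_rings=5):
    """Search cells near q, widening the ring radius until candidates appear."""
    ci, cj = int(q[0] // cell), int(q[1] // cell)
    for r in range(1, max_rings + 1):
        cand = [p for di in range(-r, r + 1) for dj in range(-r, r + 1)
                for p in grid[(ci + di, cj + dj)]]
        if cand:
            cand = np.array(cand)
            return cand[np.argmin(np.sum((cand - q) ** 2, axis=1))]
    return None  # no reference point within the search window

def icp_2d(src, ref, cell=0.5, iters=20):
    """Estimate rotation R (2x2) and translation t aligning src to ref.
    Each iteration: match points via the grid, then solve the closed-form
    least-squares alignment with an SVD (Umeyama/Horn step)."""
    grid = build_grid(ref, cell)
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        pairs = [(s, closest_point(c, grid, cell)) for s, c in zip(src, cur)]
        pairs = [(s, m) for s, m in pairs if m is not None]
        P = np.array([s for s, _ in pairs])
        Q = np.array([m for _, m in pairs])
        mp, mq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = mq - R @ mp
    return R, t
```

The grid lookup replaces the O(n) brute-force closest-point scan with a search over a handful of cells, which is the kind of constant-factor saving that matters on a 1.6 GHz Atom-class processor. The recovered rotation directly yields the yaw component ψ of the {x, y, ψ} estimate.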