The present paper describes a vision-based simultaneous localization and mapping (SLAM) system for Unmanned Aerial Vehicles (UAVs). The main contribution of this work is a novel estimator based on an Extended Kalman Filter (EKF), designed to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle (position and orientation and their first derivatives) as well as the locations of the landmarks observed by the camera. The position sensor is used only during an initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system: the estimate of the vehicle's trajectory is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are typically low-rate and highly noisy.
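The fusion scheme described above follows the standard EKF predict/update cycle. The following is a minimal sketch of that cycle for the position-sensor (GPS) update only, using a generic constant-velocity model; the matrices `F`, `Q`, `H`, `R` and all noise values are illustrative assumptions, not the paper's actual models or tuning:

```python
import numpy as np

# State: [x, y, z, vx, vy, vz] (position and its first derivative).
dt = 0.1
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
Q = 0.01 * np.eye(6)                          # assumed process noise
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # GPS observes position only
R = 4.0 * np.eye(3)                           # assumed (noisy) GPS covariance

def ekf_step(x, P, z):
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a GPS position measurement z
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

x = np.zeros(6)
P = np.eye(6)
x, P = ekf_step(x, P, np.array([1.0, 0.0, 0.0]))
```

Camera and AHRS updates would follow the same update equations with their own (nonlinear) measurement models and Jacobians in place of `H`.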
This work presents a method for estimating the model parameters of multi-rotor unmanned aerial vehicles (UAVs) by means of an extended Kalman filter. Unlike test-bed-based identification methods, the proposed approach estimates all the model parameters of a multi-rotor aerial vehicle in a single online estimation process that integrates measurements obtained directly from onboard sensors commonly available on this kind of UAV. To develop the proposed method, the observability of the system is investigated by means of a nonlinear observability analysis. First, the dynamic models of three classes of multi-rotor aerial vehicles are presented. Then, to carry out the observability analysis, the state vector is augmented by treating the parameters to be identified as state variables with zero dynamics. From the analysis, the sets of measurements from which the model parameters can be estimated are derived, together with the necessary conditions that must be satisfied to obtain these observability results. An extensive set of computer simulations validates the proposed method: according to the results, it is feasible to estimate all the model parameters of a multi-rotor aerial vehicle in a single estimation process by means of an extended Kalman filter updated with measurements obtained directly from the onboard sensors. To further validate the proposed method, the model parameters of a custom-built quadrotor were also estimated from actual flight log data. The experimental results show that the proposed method is suitable for practical application.
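The state-augmentation idea above can be illustrated on a toy system. In this sketch a hypothetical scalar thrust gain `k` in the model `v_dot = k * u` is appended to the state with zero dynamics, and an EKF recovers it from velocity measurements alone; the dynamics, input and noise values are assumptions for illustration, not the multi-rotor models of the paper:

```python
import numpy as np

# Augmented state: [v, k], where k is an unknown parameter with zero dynamics.
dt, u = 0.05, 1.0                      # assumed time step and constant input
Q = np.diag([1e-4, 1e-6])              # assumed process noise (tiny on k)
R = np.array([[1e-2]])                 # assumed measurement noise
H = np.array([[1.0, 0.0]])             # only velocity is measured

def step(x, P, z):
    v, k = x
    x_pred = np.array([v + dt * k * u, k])      # k propagates unchanged
    F = np.array([[1.0, dt * u],                # Jacobian of the model
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x_pred + (K @ (z - H @ x_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

k_true = 2.0
x, P = np.array([0.0, 0.5]), np.eye(2)          # poor initial guess k = 0.5
v_true = 0.0
rng = np.random.default_rng(0)
for _ in range(400):
    v_true += dt * k_true * u
    z = np.array([v_true + 0.05 * rng.standard_normal()])
    x, P = step(x, P, z)
```

The same mechanism scales to the full multi-rotor case: each physical parameter becomes an extra state with identity dynamics, and the observability analysis determines which sensor sets make those states recoverable.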
Using a camera, a micro aerial vehicle (MAV) can perform vision-based navigation in periods or circumstances when GPS is unavailable or only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods are an excellent alternative, because limitations on platform design, mobility and payload capacity impose considerable restrictions on the computational and sensing resources available to the MAV. However, the use of monocular vision introduces technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder and a barometer. The proposed method is based on the theoretical findings obtained from an observability analysis. Experimental results with real data confirm those theoretical findings and show that the proposed method is capable of providing good results with low-cost hardware.
A typical navigation system for Micro Aerial Vehicles (MAVs) relies primarily on GPS for position estimation. However, for several kinds of applications, the precision of GPS is inadequate, or its signal may be unavailable altogether. In this context, and due to their flexibility, monocular Simultaneous Localization and Mapping (SLAM) methods have become a good alternative for implementing vision-based navigation systems for MAVs that must operate in GPS-denied environments. On the other hand, one of the most important challenges that arises with the use of monocular vision is the difficulty of recovering the metric scale of the world. In this work, a monocular SLAM system for MAVs is presented. To overcome the problem of the metric scale, a novel technique for inferring the approximate depth of visual features from an ultrasonic range finder is developed. Additionally, the altitude of the vehicle is updated using the pressure measurements of a barometer. The proposed approach is supported by the theoretical results obtained from a nonlinear observability test. Experiments performed with both computer simulations and real data are presented to validate the performance of the proposal. The results confirm the theoretical findings and show that the method is able to work with low-cost sensors.
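One simple way such an ultrasonic altitude reading can constrain feature depth is under a flat-ground assumption: a feature seen by the downward-facing camera lies on the ground plane, so its metric depth along the pixel ray follows from the measured altitude. The sketch below illustrates this geometric idea only; the camera intrinsics (`fx`, `fy`, `cx`, `cy`) and the flat-ground model are assumptions for illustration, not necessarily the paper's exact technique:

```python
import numpy as np

# Assumed pinhole intrinsics for a 640x480 downward-facing camera.
fx = fy = 400.0
cx, cy = 320.0, 240.0

def feature_depth(u, v, h):
    """Approximate metric depth of a ground-plane feature seen at pixel (u, v),
    given the ultrasonic altitude h (optical axis pointing straight down)."""
    # Unit ray through the pixel in the camera frame (z = optical axis).
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)
    # The ground is reached when the ray's z-component covers the altitude h.
    return h / ray[2]

d_center = feature_depth(320.0, 240.0, h=1.5)   # feature straight below: depth = h
d_off = feature_depth(480.0, 240.0, h=1.5)      # off-axis ray is longer than h
```

Initializing features with such metrically scaled depths anchors the otherwise scale-free monocular map, which is the role the ultrasonic range finder plays in the system described above.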
State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle (the position, orientation and their first derivatives) together with the parameter errors of the inertial sensors (i.e., the biases of the gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed considering the limited resources commonly available on small mobile robots, and it is intended for cluttered environments, where it performs fully vision-based navigation in periods when the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: i) terrain analysis; ii) three-dimensional (3D) scene reconstruction; iii) localization, detection or perception of obstacles and generation of trajectories to navigate around them; and iv) autonomous exploration. In this work, simulations and experiments with real data are presented to validate and demonstrate the performance of the proposal.
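Including inertial-sensor parameter errors in the state works by subtracting the estimated bias during propagation and letting aiding measurements (here, position fixes) make the bias observable. The following 1D sketch illustrates that mechanism with a hypothetical accelerometer bias; the state layout, noise values and rates are assumptions for illustration, not the paper's actual filter:

```python
import numpy as np

# State: [position, velocity, accel_bias]; the bias has zero dynamics.
dt = 0.01
Q = np.diag([0.0, 1e-4, 1e-8])       # assumed process noise
H = np.array([[1.0, 0.0, 0.0]])      # position fix (e.g. GPS) observes position
R = np.array([[1.0]])                # assumed position-fix noise

def propagate(x, P, a_meas):
    p, v, b = x
    a = a_meas - b                   # remove the current bias estimate
    x = np.array([p + v * dt, v + a * dt, b])
    F = np.array([[1, dt,   0],
                  [0,  1, -dt],      # bias feeds velocity with factor -dt
                  [0,  0,   1]], dtype=float)
    return x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

b_true = 0.3                          # hypothetical accelerometer bias
x, P = np.zeros(3), np.eye(3)
p_true = 0.0
for i in range(20000):                # vehicle at rest: the IMU reads only the bias
    x, P = propagate(x, P, a_meas=b_true)
    if i % 100 == 0:                  # low-rate position fix
        x, P = update(x, P, np.array([p_true]))
```

An unmodeled bias would otherwise integrate into a quadratically growing position drift, which is exactly why the low-rate position fixes carry enough information to pin it down.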