The information generated by a computer vision system that labels a land surface as water, vegetation, soil, or another type can be used for mapping and decision making. For example, an unmanned aerial vehicle (UAV) can use it to find a suitable landing position or to cooperate with other robots in navigating an unknown region. Previous work on terrain classification from RGB images taken onboard UAVs tested only static pixel-based features and reported considerable classification error. This paper proposes a robust and efficient computer vision algorithm that classifies terrain from RGB images with improved accuracy. The algorithm complements static image features with the dynamic texture patterns produced by the downwash effect of the UAV's rotors (visible at low altitudes) and uses machine learning methods to classify the underlying terrain. The system is validated on videos acquired onboard a UAV.
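The idea of combining static color features with a texture cue can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the patch sizes, the local-variance texture proxy, the synthetic "vegetation" and "water" training patches, and the nearest-centroid classifier are all hypothetical stand-ins for the real features and learned model.

```python
import numpy as np

def patch_features(patch):
    """Static color features plus a simple texture proxy: mean RGB and
    local intensity variance (water rippled by rotor downwash shows
    higher variance than a still, smooth surface)."""
    mean_rgb = patch.reshape(-1, 3).mean(axis=0)
    gray = patch.mean(axis=2)
    return np.append(mean_rgb, gray.var())

def nearest_centroid(feats, centroids, labels):
    """Assign the label of the closest class centroid in feature space."""
    d = np.linalg.norm(centroids - feats, axis=1)
    return labels[int(np.argmin(d))]

rng = np.random.default_rng(0)
# Hypothetical 16x16 training patches: smooth green "vegetation"
# versus noisy (downwash-rippled) blue "water".
veg = np.clip(rng.normal([60, 140, 60], 5, (16, 16, 3)), 0, 255)
water = np.clip(rng.normal([40, 80, 160], 40, (16, 16, 3)), 0, 255)
centroids = np.stack([patch_features(veg), patch_features(water)])
labels = ["vegetation", "water"]

# An unseen patch with water-like color and high texture variance.
test_patch = np.clip(rng.normal([45, 85, 155], 35, (16, 16, 3)), 0, 255)
print(nearest_centroid(patch_features(test_patch), centroids, labels))
```

The point of the sketch is that the texture term separates classes that color alone can confuse, which is the role the downwash-induced dynamic texture plays in the proposed algorithm.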
This work addresses the problem of unmanned aerial vehicle (UAV) navigation in indoor environments. Because satellite signals are unavailable indoors, the proposed algorithm exploits terrestrial radio measurements between the UAV and a set of stationary reference points, from which it extracts range information, as well as odometry from inertial sensors such as accelerometers. On the one hand, based on the maximum a posteriori (MAP) criterion, the range information and the knowledge accumulated throughout the UAV's movement are used to derive a generalized trust region sub-problem (GTRS), which is solved exactly via a bisection procedure. On the other hand, a second position estimate is obtained from odometry, using the UAV's transform with respect to the world frame. Finally, the two position estimates are combined through a Kalman filter (KF) to enhance positioning accuracy and produce the final estimate of the UAV's position. The UAV is then navigated to a desired destination by computing the velocity components along the shortest path. Our results show that the proposed algorithm is robust to various model parameters for high-precision (HP) UAV sensors, achieving good positioning accuracy. The results also corroborate that the algorithm is suitable for real-time applications, consuming on average only 21 ms to estimate the UAV's position.

INDEX TERMS: Generalized trust region sub-problem (GTRS), indoor environments, Kalman filter (KF), maximum a posteriori (MAP) estimator, navigation, odometry, positioning, unmanned aerial vehicle (UAV).
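The fusion step can be sketched with a minimal one-dimensional Kalman filter, assuming odometry drives the prediction and the range-based (GTRS) estimate serves as the measurement. The noise variances `Q` and `R`, the 1D state, and the simulated trajectory are all hypothetical; the paper's actual filter fuses full 3D estimates.

```python
import numpy as np

def kf_fuse(x_prev, P_prev, odo_delta, z_range, Q=0.05, R=0.2):
    """One KF cycle: predict with the odometry displacement, then
    correct with the range-based position estimate.
    Q, R are hypothetical process/measurement noise variances."""
    # Predict: propagate the previous estimate by the odometry increment.
    x_pred = x_prev + odo_delta
    P_pred = P_prev + Q
    # Update: blend in the range-based measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z_range - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Hypothetical 1D trajectory: the UAV advances 1 m per step.
rng = np.random.default_rng(1)
x, P, truth = 0.0, 1.0, 0.0
for _ in range(20):
    truth += 1.0
    odo = 1.0 + rng.normal(0, 0.05)   # noisy odometry increment
    z = truth + rng.normal(0, 0.3)    # noisy range-based position estimate
    x, P = kf_fuse(x, P, odo, z)
print(f"final position error: {abs(x - truth):.2f} m")
```

Note how the covariance `P` shrinks and settles as measurements accumulate: neither source alone is trusted fully, which is why the fused estimate outperforms raw odometry (which drifts) and raw range fixes (which are noisy).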
Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries, being widely used not only by enthusiastic consumers but also in highly demanding professional settings, and they will have a massive societal impact over the coming years. However, the operation of UAVs is fraught with serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time, sometimes being computationally intractable for existing State of the Art (SoA) algorithms, which makes the use of UAVs an operational hazard and therefore significantly reduces their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, with a focus on the architectural requirements the collision avoidance subsystem must meet to achieve acceptable levels of safety and reliability. SoA principles for collision avoidance against stationary objects are reviewed, and a novel approach is described that uses deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects. The proposed framework includes a web interface that allows full control of UAVs as remote clients through a supervisor cloud-based platform. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV developed from scratch with the proposed framework. Test flight results are presented for an autonomous UAV monitored from multiple countries across the world.