The information generated by a computer vision system capable of labelling a land surface as water, vegetation, soil, or another type can be used for mapping and decision making. For example, an unmanned aerial vehicle (UAV) can use it to find a suitable landing position or to cooperate with other robots while navigating across an unknown region. Previous work on terrain classification from RGB images taken onboard UAVs tested only static pixel-based features and reported a considerable classification error. This paper proposes a robust and efficient computer vision algorithm that classifies the terrain from RGB images with improved accuracy. The algorithm complements static image features with the dynamic texture patterns produced by the downwash effect of the UAV's rotors (visible at lower altitudes) and uses machine learning methods to classify the underlying terrain. The system is validated using videos acquired onboard a UAV.
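To make the feature-fusion idea concrete, the sketch below combines static per-patch color statistics with a simple temporal-variance measure of the downwash-induced dynamic texture. It is a minimal illustration only: the abstract does not specify the paper's actual features or classifier, so the helper functions, the temporal-variance descriptor, and the use of scikit-learn's RandomForestClassifier are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def static_features(patch):
    """Per-channel mean and std of a single RGB patch (H, W, 3)."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def dynamic_texture_features(patch_stack):
    """Temporal variance of intensity over a stack of co-located patches
    (T, H, W, 3); downwash ripples on water or grass would show up as
    high temporal variance (an assumed descriptor, not the paper's)."""
    gray = patch_stack.mean(axis=-1)          # (T, H, W) grayscale
    temporal_var = gray.var(axis=0)           # (H, W) variance across frames
    return np.array([temporal_var.mean(), temporal_var.max()])

def patch_descriptor(patch_stack):
    """Fuse static features of the last frame with dynamic-texture cues."""
    return np.concatenate([static_features(patch_stack[-1]),
                           dynamic_texture_features(patch_stack)])

# Example: descriptor for a synthetic 8-frame stack of 16x16 RGB patches.
stack = np.random.rand(8, 16, 16, 3)
print(patch_descriptor(stack).shape)          # (8,): 6 static + 2 dynamic values

# With labelled patches X (descriptors) and y (water/vegetation/soil/other):
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```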
This work addresses the problem of unmanned aerial vehicle (UAV) navigation in indoor environments. Because satellite signals are unavailable indoors, the proposed algorithm takes advantage of terrestrial radio measurements between the UAV and a set of stationary reference points, from which it extracts range information, as well as odometry obtained from inertial sensors such as accelerometers. On the one hand, based on the maximum a posteriori (MAP) criterion, the range information and the knowledge accumulated throughout the UAV's movement are used to derive a generalized trust region sub-problem (GTRS), which is solved exactly via a bisection procedure. On the other hand, using the UAV's transform relative to the world frame, another position estimate is obtained from odometry. Finally, the two position estimates are combined through a Kalman filter (KF) to enhance the positioning accuracy and obtain the final estimate of the UAV's position. The UAV is then navigated to a desired destination by simply computing the velocity components along the shortest path. Our results show that the proposed algorithm is robust to various model parameters for high precision (HP) UAV sensors, achieving reasonably good positioning accuracy. The results also corroborate that the proposed algorithm is suitable for real-time applications, consuming on average only 21 ms to estimate the UAV position.

Index Terms: Generalized trust region sub-problem (GTRS), indoor environments, Kalman filter (KF), maximum a posteriori (MAP) estimator, navigation, odometry, positioning, unmanned aerial vehicle (UAV).
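The final fusion step described in the abstract, combining the odometry-driven prediction with the range-based GTRS/MAP position fix through a Kalman filter, can be sketched as below. This is a minimal, assumed reading: a 3-D position-only state, an identity measurement model, and placeholder noise covariances stand in for the paper's actual filter design.

```python
import numpy as np

class PositionKF:
    """Fuses an odometry-predicted position (prediction step) with the
    range-based GTRS/MAP fix (update step). State = 3-D UAV position."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P = x0, P0      # state estimate and covariance
        self.Q, self.R = Q, R        # process and measurement noise

    def predict(self, odom_delta):
        # Odometry supplies the displacement since the last step.
        self.x = self.x + odom_delta
        self.P = self.P + self.Q

    def update(self, z_range_fix):
        # z_range_fix: position estimated from radio ranges (GTRS/MAP).
        S = self.P + self.R                    # innovation covariance
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ (z_range_fix - self.x)
        self.P = (np.eye(3) - K) @ self.P

# Placeholder covariances; real values would come from sensor calibration.
kf = PositionKF(x0=np.zeros(3), P0=np.eye(3),
                Q=0.01 * np.eye(3), R=0.1 * np.eye(3))
kf.predict(odom_delta=np.array([0.10, 0.00, 0.02]))
kf.update(z_range_fix=np.array([0.12, -0.01, 0.03]))
print(kf.x)   # fused position estimate
```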
Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries. They are widely used not only among enthusiastic consumers but also in highly demanding professional settings, and they will have a massive societal impact over the coming years. However, the operation of UAVs is fraught with serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time, sometimes being computationally intractable for existing state-of-the-art (SoA) algorithms, which makes the use of UAVs an operational hazard and significantly reduces their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, with a focus on the architectural requirements the collision avoidance subsystem must meet to achieve acceptable levels of safety and reliability. The SoA principles for collision avoidance against stationary objects are reviewed, and a novel approach is described that uses deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects. The proposed framework includes a web interface allowing full control of UAVs as remote clients through a supervisor cloud-based platform. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV developed from scratch with the proposed framework. Test-flight results are presented for an autonomous UAV monitored from multiple countries across the world.
Knowing how to identify terrain types is especially important for autonomous navigation, mapping, decision making, and detecting landing areas. A recent research direction is cooperation among robots and the improvement of their autonomous behavior. For example, an unmanned aerial vehicle (UAV) can be used to identify a possible landing area, or in cooperation with other robots to navigate across unknown terrain. This paper presents a computer vision algorithm capable of identifying the type of terrain over which the UAV is flying, using the downwash effect of its rotors. The algorithm fuses an adapted Wiener-Khinchin analysis in the frequency domain with Empirical Mode Decomposition (EMD) in the spatial domain. To increase certainty in terrain identification, machine learning is also used. The system is validated using videos acquired onboard a UAV with an RGB camera.
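As background for the frequency-domain part, the Wiener-Khinchin theorem states that the power spectral density (PSD) of a wide-sense-stationary signal is the Fourier transform of its autocorrelation. The sketch below estimates the PSD of a single pixel's intensity over time this way; the paper's "adapted" variant and the fusion with EMD are not reproduced, and the ripple signal is synthetic.

```python
import numpy as np

def psd_via_wiener_khinchin(signal):
    """Estimate the PSD as the Fourier transform of the autocorrelation
    (Wiener-Khinchin). Input: 1-D intensity time series of one pixel."""
    x = signal - signal.mean()
    n = len(x)
    # Biased autocorrelation estimate over all lags.
    acf = np.correlate(x, x, mode="full") / n
    # FFT of the autocorrelation yields the PSD (magnitude, up to noise).
    psd = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(len(acf))
    return freqs, psd

# A strong low-frequency PSD peak would indicate downwash-driven ripples
# (e.g., on water), helping to separate terrain types.
t = np.arange(256)
ripple = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.random.randn(256)
freqs, psd = psd_via_wiener_khinchin(ripple)
print(freqs[np.argmax(psd)])   # approximately 0.05 cycles/frame
```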
Unmanned Aerial Vehicles (UAVs), while not a recent invention, have recently acquired a prominent position in many industries; they are increasingly used not only by avid consumers but also in demanding technical use cases, and they will have a significant societal effect in the coming years. However, the use of UAVs is fraught with significant safety threats, such as collisions with dynamic obstacles (other UAVs, birds, or randomly thrown objects). This research focuses on a safety problem that is often overlooked due to a lack of technology and solutions to address it: collisions with non-stationary objects. A novel approach is described that employs deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects using off-the-shelf commercial vision sensors. The viability of the suggested approach was corroborated by multiple experiments, first in simulation and afterward in a concrete real-world case consisting of dodging a thrown ball. A novel video dataset was created and made available for this purpose, and transfer learning was also tested, with positive results.
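The transfer-learning experiment mentioned at the end can be illustrated with the generic fine-tuning pattern below: an ImageNet-pretrained backbone is frozen and only a new classification head is trained on the thrown-ball frames. The backbone choice (torchvision's resnet18) and the three-way evasion label set are assumptions for illustration, not the paper's actual setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed label set: no_threat / dodge_left / dodge_right.
NUM_CLASSES = 3

# Reuse an ImageNet-pretrained backbone; freeze it and retrain a small head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """frames: (B, 3, 224, 224) tensor; labels: (B,) class indices."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```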
Quality control is an area of utmost importance for fabric production companies. By failing to detect the defects present in fabrics, companies risk losing money and damaging their reputation with a flawed product. In a traditional system, an inspection accuracy of 60-75% is observed. To reduce these costs, this paper proposes a fast, automatic defect detection system that can be complemented by the operator's decision. A custom Convolutional Neural Network (CNN) was used to perform the defect detection task. To obtain a well-generalized system, more than 50 defect types were used in the training process. Additionally, since an undetected defect (False Negative, FN) usually costs the company more than a non-defective fabric being classified as defective (False Positive, FP), FN-reduction methods were incorporated into the proposed system. In testing, the system attained an average accuracy of 75% in automatic mode; however, when the FN-reduction method was applied with operator intervention, an average accuracy of 95% was achieved. These results demonstrate the system's ability to detect many different types of defects with good accuracy while remaining fast and computationally simple.
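The abstract does not detail the FN-reduction method, but one common scheme matching its description is asymmetric thresholding: a low-confidence band of CNN outputs is routed to the operator instead of being auto-passed as defect-free, trading extra false positives for fewer missed defects. The sketch below shows this idea with placeholder threshold values.

```python
import numpy as np

def triage(defect_probs, auto_thresh=0.5, review_thresh=0.2):
    """Asymmetric thresholding as one plausible FN-reduction scheme:
    confident detections are rejected automatically, while a
    low-confidence band goes to the human operator for review."""
    return np.where(
        defect_probs >= auto_thresh, "defect",
        np.where(defect_probs >= review_thresh, "operator_review", "ok"))

# Example CNN defect probabilities for five fabric samples.
probs = np.array([0.92, 0.35, 0.05, 0.60, 0.21])
print(triage(probs))
# ['defect' 'operator_review' 'ok' 'defect' 'operator_review']
```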