This paper presents a novel control strategy, which we call optiPilot, for autonomous flight in the vicinity of obstacles. Most existing autopilots rely on a complete 6-degree-of-freedom state estimation using a GPS and an Inertial Measurement Unit (IMU) and are unable to detect and avoid obstacles. This is a limitation for missions such as surveillance and environment monitoring that may require near-obstacle flight in urban areas or mountainous environments. OptiPilot instead uses optic flow to estimate the proximity of obstacles and avoid them. Our approach takes advantage of the fact that, for most platforms in translational flight (as opposed to near-hover flight), the translatory motion is essentially aligned with the aircraft main axis. This property allows us to directly interpret optic-flow measurements as proximity indications. We take inspiration from the neural and behavioural strategies of flying insects to propose a simple mapping of optic-flow measurements into control signals that requires only a lightweight and power-efficient sensor suite and minimal processing power. In this paper, we first describe results obtained in simulation before presenting the implementation of optiPilot on a real flying platform equipped only with lightweight and inexpensive optic computer mouse sensors, MEMS rate gyroscopes and a pressure-based airspeed sensor. We show that the proposed control strategy not only allows collision-free flight in the vicinity of obstacles, but is also able to stabilise both attitude and altitude over flat terrain.
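The key geometric fact exploited above is that, under pure translation along the aircraft main axis, the optic flow measured at an eccentricity θ from the motion direction scales with airspeed and inversely with distance. A minimal sketch of this interpretation, with illustrative function and parameter names that are assumptions rather than the paper's implementation:

```python
import math

def proximity_from_optic_flow(of_mag, airspeed, eccentricity_deg):
    """Estimate proximity (1/distance) from a translational optic-flow
    measurement. Under pure translation at speed v along the main axis,
    a detector at eccentricity theta sees OF = (v / D) * sin(theta),
    so proximity mu = 1/D can be recovered as OF / (v * sin(theta)).
    This is the textbook relation, not the paper's exact pipeline."""
    theta = math.radians(eccentricity_deg)
    return of_mag / (airspeed * math.sin(theta))

# A detector 45 degrees off-axis measuring 2 rad/s of flow at 10 m/s airspeed:
mu = proximity_from_optic_flow(2.0, 10.0, 45.0)
print(round(1.0 / mu, 2))  # distance to the obstacle in metres -> 3.54
```

Note that the relation degrades near the motion direction (sin θ → 0), which is why detectors are pointed at divergent viewing directions around the main axis rather than straight ahead.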
ABSTRACT: This paper presents an affordable, fully automated and accurate mapping solution based on ultra-light UAV imagery. Several datasets are analysed and their accuracy is estimated. We show that the accuracy depends strongly on the ground resolution (flying height) of the input imagery. When chosen appropriately, this mapping solution can compete with traditional mapping solutions that capture fewer high-resolution images from airplanes and that rely on highly accurate orientation and positioning sensors on board. Thanks to careful integration of recent computer vision techniques, the post-processing is robust and fully automatic and can deal with the inaccurate position and orientation information that is typically problematic for traditional techniques.
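The link between flying height and ground resolution invoked above is the standard ground-sampling-distance relation for a nadir-looking camera. A short sketch, with the sensor parameters chosen purely for illustration:

```python
def ground_sampling_distance(height_m, pixel_pitch_um, focal_mm):
    """Ground sampling distance (metres per pixel) for a nadir camera:
    GSD = H * pixel_pitch / focal_length. Halving the flying height
    halves the GSD, i.e. doubles the ground resolution. The parameter
    values used below are illustrative, not from the paper's datasets."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# e.g. flying at 150 m with a 2 um pixel pitch and a 5 mm lens:
gsd = ground_sampling_distance(150.0, 2.0, 5.0)
print(round(gsd * 100, 1))  # -> 6.0 cm per pixel
```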
Because of their ability to naturally float in the air, indoor airships (often called blimps) constitute an appealing platform for research in aerial robotics. However, when confronted with long-lasting experiments such as those involving learning or evolutionary techniques, blimps present the disadvantage that they cannot be linked to external power sources and tend to have little mechanical resistance due to their low weight budget. One solution to this problem is to use a realistic flight simulator, which can also significantly reduce experimental duration by running faster than real time. This requires an efficient physics-based dynamic model and a parameter identification procedure, which are complicated to develop and usually rely on costly facilities such as wind tunnels. In this paper, we present a simple and efficient physics-based dynamic model of indoor airships, including a pragmatic methodology for parameter identification without the need for complex or costly test facilities. Our approach is tested with an existing blimp in a vision-based navigation task. Neuronal controllers are evolved in simulation to map visual input into motor commands in order to steer the flying robot forward as fast as possible while avoiding collisions. After evolution, the best individuals are successfully transferred to the physical blimp, which experimentally demonstrates the efficiency of the proposed approach.
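To make the "physics-based dynamic model" concrete, the simulator loop can be sketched as forward integration of a force balance. The following deliberately simplified 1-D version omits the added-mass and 6-DOF coupling terms a real blimp model needs; all parameter values are placeholders, not identified values from the paper:

```python
def step_blimp(v, u, dt, mass=0.2, drag=0.05, thrust_gain=0.1):
    """One explicit-Euler step of a toy 1-D blimp model:
    m * dv/dt = thrust_gain * u - drag * v.
    Parameter identification amounts to fitting mass, drag and
    thrust_gain so the simulated trajectory matches logged flights."""
    dv = (thrust_gain * u - drag * v) / mass
    return v + dt * dv

# Full throttle from standstill: velocity rises toward thrust_gain/drag = 2 m/s.
v = 0.0
for _ in range(10):
    v = step_blimp(v, u=1.0, dt=0.1)
```

Running such a model faster than real time is what allows evolutionary runs that would take days on the physical blimp to finish in minutes.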
Abstract: Fully autonomous control of ultra-light indoor airplanes has not yet been achieved because of the strong limitations on the kinds of sensors that can be embedded, making it difficult to obtain good altitude estimates. We propose to revisit altitude control by treating it as an obstacle-avoidance problem and introduce a novel control scheme in which the ground and ceiling are avoided based on translatory optic flow, in a way similar to existing vision-based wall-avoidance strategies. We show that this strategy successfully controls a simulated microflyer without any explicit altitude estimation, using only simple sensors and processing that have already been embedded in an existing 10-gram microflyer. This result is thus a significant step toward autonomous control of indoor flying robots.
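The idea of avoiding the ground and ceiling symmetrically can be sketched as a differential comparison of ventral and dorsal translatory optic flow. The gain and sign convention below are assumptions for illustration, not the paper's tuned values:

```python
def pitch_command(of_ventral, of_dorsal, gain=0.5):
    """Altitude control recast as obstacle avoidance: if the ventral
    (downward-looking) translatory optic flow exceeds the dorsal
    (upward-looking) one, the ground is closer than the ceiling, so
    command a pitch-up (positive), and vice versa. No explicit altitude
    estimate is ever computed; only flow magnitudes are compared."""
    return gain * (of_ventral - of_dorsal)
```

With equal clearance to ground and ceiling the two flows cancel and the command is zero, so the aircraft settles between the two surfaces, exactly as a wall-following robot centres itself in a corridor.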
We aim to develop ultralight autonomous microflyers capable of flying freely within houses or small built environments while avoiding collisions. Our latest prototype is a fixed-wing aircraft weighing a mere 10 g, flying at around 1.5 m/s and carrying the necessary electronics for airspeed regulation and lateral collision avoidance. This microflyer is equipped with two tiny camera modules, two rate gyroscopes, an anemometer, a small microcontroller, and a Bluetooth radio module. In-flight tests are carried out in a new experimentation room specifically designed for easy changing of surrounding textures.
Keywords: indoor flying robot, vision-based navigation, collision avoidance, optic flow.
The ability to fly at low altitude while actively avoiding collisions with the terrain and objects such as trees and buildings is a great challenge for small unmanned aircraft. This paper builds on a control strategy called optiPilot, whereby a series of optic-flow detectors pointed at divergent viewing directions around the aircraft main axis are linearly combined into roll and pitch commands using two sets of weights. This control strategy has already proved successful at controlling flight and avoiding collisions in reactive navigation experiments. This paper describes how optiPilot can efficiently steer a flying platform during the critical phases of hand-launched take-off and landing. It then shows how optiPilot can be coupled with a GPS in order to provide goal-directed, nap-of-the-earth flight control in the presence of obstacles. Two fully autonomous flights of 25 minutes each are described in which a 400-gram unmanned aircraft flies at approx. 10 m above ground on a circular path including two copses of trees requiring efficient collision-avoidance actions.
INTRODUCTION
Small unmanned aircraft capable of fully autonomous flight at low altitude while avoiding obstacles are not only of military interest, but could be of great help in many civilian applications as well. One can think of ultra-low-altitude imagery to construct 2D or 3D maps with unprecedented resolution and realism; measuring air quality in urban environments to better understand pollution spreading and alert the population only when and where necessary; measuring radio signal strength in order to determine the coverage of mobile telephony antennas or network access points; searching for lost people; transporting small parcels across a city; etc.
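The "two sets of weights" mentioned above can be sketched as a linear read-out over proximity estimates from detectors arranged around the main axis. A common choice, used here as an assumption rather than the paper's exact weights, is to weight each detector by the sine/cosine of its azimuth so that an obstacle seen to the side produces a roll-away command and one seen below produces a pitch-up command:

```python
import math

def optipilot_commands(proximities, azimuths_deg, k_roll=1.0, k_pitch=1.0):
    """optiPilot-style control law: proximity estimates from optic-flow
    detectors at divergent viewing directions around the main axis are
    combined linearly into roll and pitch commands with two fixed weight
    sets (here sin/cos of each detector's azimuth; azimuth 0 = straight
    down). Gains and weight shapes are illustrative placeholders."""
    roll = k_roll * sum(p * math.sin(math.radians(a))
                        for p, a in zip(proximities, azimuths_deg))
    pitch = k_pitch * sum(p * math.cos(math.radians(a))
                          for p, a in zip(proximities, azimuths_deg))
    return roll, pitch

# Four detectors at 0 (below), 90 (right), 180 (above), 270 (left) degrees,
# with the ground much closer than anything else: pure pitch-up, no roll.
roll, pitch = optipilot_commands([0.5, 0.05, 0.0, 0.05], [0, 90, 180, 270])
```

Because the mapping is a fixed linear combination, it runs in a handful of multiply-accumulates per control cycle, which is what makes it viable on the minimal processing power the abstracts emphasise.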
Commercially available miniature autopilots solve the problem of flight stabilisation and waypoint navigation in free spaces using GPS+IMU, but they offer no practical solution for coping with obstacles. On the research side, we find two kinds of approaches attempting to solve this problem. The first consists of relying on classical GPS+IMU autopilots [3] and adding sensors that scan the environment in order to feed path corrections back into the autopilot, which remains at the core of the navigation process [4,5,6]. These methods tend to be heavy and computationally intensive, because a 3D map of the environment needs to be maintained in real time and the sensors used to perceive depth need to be highly accurate in order for the algorithms to converge. In addition, these methods would be difficult to use in the take-off and landing phases, where the distances to obstacles are so small that there is only time for reactive manoeuvres. This paper presents an alternative approach that consists of solving the collision-avoidance problem before adding goal-directed navigation ability on top. Following the behaviour-based approach [7], the idea is to develop a system that can wander around without hitting objects before adding a navigation layer on top of it. In order to ensure the low-level collision-avoidance…