The recent technological advances in Micro Aerial Vehicles (MAVs) have triggered great interest in the robotics community, as their deployability in missions of surveillance and reconnaissance has now become a realistic prospect. The state of the art, however, still lacks solutions that can work for a long duration in large, unknown, and GPS‐denied environments. Here, we present our visual pipeline and MAV state‐estimation framework, which uses feeds from a monocular camera and an Inertial Measurement Unit (IMU) to achieve real‐time, onboard, autonomous flight in general and realistic scenarios. The challenge lies in dealing with the power and weight restrictions onboard a MAV while providing the robustness necessary in real and long‐term missions. This article provides a concise summary of our work on achieving the first onboard, vision‐based, power‐on‐and‐go system for autonomous MAV flights. We discuss our insights on the lessons learned throughout the different stages of this research, from the conception of the idea to the thorough theoretical analysis of the proposed framework and, finally, the real‐world implementation and deployment. Covering the onboard estimation of monocular visual odometry, the sensor fusion strategy, the state estimation and self‐calibration of the system, and finally some implementation issues, the reader is guided through the different modules comprising our framework. The validity and power of this framework are illustrated via a comprehensive set of experiments in a large outdoor mission, demonstrating successful operation over flight trajectories of more than 360 m with altitude changes of 70 m.
In this paper, we present our latest achievements towards the goal of autonomous flight of an MAV in unknown environments, using only a monocular camera as exteroceptive sensor. As MAVs are highly agile, it is not sufficient to use the visual input directly for position control at the frame rates achievable with small onboard computers. Our contributions in this work are twofold. First, we present a solution to the mismatch between the low-frequency onboard visual pose updates and the high agility of an MAV. This is solved by filtering the visual information with inputs from inertial sensors. Second, as our system is based on monocular vision, we present a solution to estimate the metric visual scale with the aid of an air pressure sensor. All computation runs onboard and is tightly integrated on the MAV to avoid jitter and latencies. This framework enables stable flights indoors and outdoors, even under windy conditions.
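The scale-estimation idea can be sketched compactly: monocular odometry reports altitude changes only up to an unknown scale, while the barometer reports them in metres, so a least-squares fit over corresponding altitude increments recovers the metric scale factor. This is a simplified, hypothetical sketch — the actual system estimates the scale recursively inside its filter, and the function name and batch formulation below are illustrative, not from the paper.

```python
import numpy as np

def estimate_visual_scale(visual_z, baro_z):
    """Least-squares estimate of the scale factor lam such that
    lam * visual_z ~= baro_z, given matched altitude samples.
    Illustrative batch sketch; the real system estimates scale
    recursively inside its state-estimation filter."""
    visual_z = np.asarray(visual_z, dtype=float)
    baro_z = np.asarray(baro_z, dtype=float)
    # Work on increments so that unknown constant offsets between
    # the two altitude references cancel out.
    dv = np.diff(visual_z)
    db = np.diff(baro_z)
    # lam = argmin ||lam*dv - db||^2 = (dv . db) / (dv . dv)
    return float(np.dot(dv, db) / np.dot(dv, dv))

# Example: odometry altitude in arbitrary units, barometer in metres;
# here the true scale is 2.5 by construction.
vis = [0.0, 0.4, 1.0, 1.8, 2.4]
baro = [0.0, 1.0, 2.5, 4.5, 6.0]
scale = estimate_visual_scale(vis, baro)
```

In practice the barometric signal is noisy and drifts with weather, which is one reason a recursive filter with an explicit scale state is preferable to a one-shot batch fit.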
Abstract-We describe an efficient, reliable, and robust four-rotor flying platform for indoor and outdoor navigation. Currently, similar platforms are controlled at low frequencies due to hardware and software limitations. This causes uncertainty in position control and unstable behavior during fast maneuvers. Our flying platform offers a 1 kHz control frequency and motor update rate, in combination with powerful brushless DC motors in a lightweight package. Following a minimalistic design approach, this system is based on a small number of low-cost components. Its robust performance is achieved by using simple but reliable, highly optimized algorithms. The robot is small, light, and can carry payloads of up to 350 g.
In recent years, quadrotor helicopters have become very popular for a variety of applications. In this paper, we discuss the effects of having more than four rotors on such a Micro Aerial Vehicle (MAV). In particular, we address the influence of the number of rotors on the achievable dynamics, the efficiency, and the possibility of redundancy in case of motor failures. Furthermore, we consider different airframe designs with different arrangements of a given number of rotors. This discussion leads to a six-rotor helicopter with a hexagonal configuration, which ensures redundancy through a novel control concept. We present the overall concept of this new MAV, as well as control strategies for handling redundancy situations.
Abstract-The SFly project is an EU-funded project with the goal of creating a swarm of autonomous, vision-controlled micro aerial vehicles. The envisioned mission is that a swarm of MAVs autonomously maps out an unknown environment, computes optimal surveillance positions, places the MAVs there, and then locates radio beacons in this environment. The scope of the work includes contributions on multiple levels, ranging from theoretical foundations to hardware design and embedded programming. One of the contributions is the development of a new MAV, a hexacopter, equipped with enough processing power for onboard computer vision. A major contribution is the development of monocular visual SLAM that runs in real time onboard the MAV. The visual SLAM results are fused with IMU measurements and are used to stabilize and control the MAV. This enables autonomous flight of the MAV without the need for a data link to a ground station. Within this scope, novel analytical solutions for fusing IMU and vision measurements have been derived. In addition to the real-time local SLAM, an offline dense mapping process has been developed. For this, the MAVs are equipped with a payload of a stereo camera system. The dense environment map is used to compute optimal surveillance positions for a swarm of MAVs. For this, an optimization technique based on cognitive adaptive optimization has been developed. Finally, the MAVs have been equipped with radio transceivers, and a method has been developed to locate radio beacons in the observed environment.
I. EXTENDED SUMMARY
The goal of the SFly project [1] was to create a swarm of autonomous Micro Aerial Vehicles (MAVs) for applications in search and rescue missions. This video (additional videos are available on the SFly Youtube Channel [2]) demonstrates the use of the developed MAVs in a simulated disaster response situation.
In the demonstrated mission, the MAVs are used to provide an aerial overview of the disaster scene and to locate victims. In the first step of the mission, a swarm of 3 MAVs autonomously explores the environment and captures aerial image data, which is used to compute a 3D model of the environment (Fig. 1). The SFly MAV (developed by the project partner Ascending Technologies) consists of a hexacopter base with a diameter of around 55 cm [16]. It is equipped with an IMU for attitude control as well as a pressure sensor and GPS. A highlight is the onboard computer, an Intel Core2Duo, which is powerful enough to perform the real-time image processing of the onboard cameras. The MAV is equipped with a downward-looking monocular camera, which is used for flight control, and a configurable stereo setup that can be used either for mapping or for obstacle detection. The weight of the system is 1.5 kg. For flight control, a local visual SLAM algorithm runs onboard and in real time using the downward-looking monocular camera. This allows stable hovering as well as takeoff and landing maneuvers [17], [5], [12]. The full state of the MAV is computed by fusing the visual SLAM poses with measurements from the IMU.
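The fusion of low-rate visual SLAM poses with high-rate inertial data can be illustrated with a toy one-axis predictor-corrector: the IMU acceleration is integrated at full rate for prediction, and each sparse visual position fix corrects the accumulated drift. This is only a conceptual sketch with made-up gains and rates — the actual systems run an EKF over the full 3D pose, velocity, biases, and scale.

```python
def fuse_imu_vision(accel, vision_pos, dt=0.005, vision_every=40,
                    k_p=0.05, k_v=0.01):
    """Toy 1-axis complementary filter: integrate IMU acceleration
    (here 200 Hz, assumed bias-free) for prediction and correct with
    low-rate visual position fixes (here 200/vision_every = 5 Hz).
    Gains and rates are illustrative, not from the papers.
    Returns the position estimate after each IMU step."""
    p, v = 0.0, 0.0   # position and velocity estimates
    j = 0             # index into the visual fixes
    est = []
    for i, a in enumerate(accel):
        v += a * dt   # predict velocity from IMU acceleration
        p += v * dt   # predict position
        if i % vision_every == 0 and j < len(vision_pos):
            r = vision_pos[j] - p   # innovation from the visual pose
            p += k_p * r            # correct position...
            v += k_v * r / dt       # ...and bleed off velocity drift
            j += 1
        est.append(p)
    return est

# Example: constant acceleration of 2 m/s^2, so the true position is t^2;
# visual fixes arrive at only every 40th IMU sample.
accel = [2.0] * 2000
vision = [(i * 0.005) ** 2 for i in range(0, 2000, 40)]
est = fuse_imu_vision(accel, vision)
```

The design point the papers make is exactly this split: the high-rate inertial prediction gives the controller a fresh state between visual updates, while the visual corrections keep the integration from drifting.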
Adaptive control of unmanned aerial vehicles has gained recent interest in the field of flight control. Control algorithms seek to provide robustness in the presence of uncertain parameters, unmodeled dynamics, external disturbances, or failure situations. As adaptive control algorithms are designed a priori to account for uncertain system dynamics and determine the system parameters online, they provide a promising approach to improving the robustness of the control system w.r.t. parameter uncertainties. In this paper, we present an adaptive attitude controller for a quadcopter utilizing the full dynamic bandwidth of the system. The concept of Model Reference Adaptive Control is used in combination with a nonlinear control structure based on the method of nonlinear dynamic inversion. Standard robustness modifications are used and adapted to the specific application on the quadcopter in order to ensure long-term stability and robustness against unmodeled dynamics as well as external disturbances without persistent excitation. The focus is on fast and robust adaptation, so that even complete resets of the adaptive system in flight are possible. Further issues, such as unbounded growth of adaptive gains or integrator wind-up due to actuator limitations, are accounted for in the control structure and are successfully prevented. A small quadcopter is used as experimental platform, which enables the authors to perform real flight experiments without the need for expensive flight tests on larger systems. Therefore, all algorithms are optimized to run at high update rates on the onboard microprocessor hardware. The fast update rates of 1 kHz of the control loops are one key feature in achieving the high performance of the system. The MATLAB/Simulink-based tools used to design the control system, the implementation and the optimization for the onboard hardware are presented, as well as the quadcopter itself.
Experimental results prove that a highly adaptive control system is able to handle a wide variety of external disturbances or parameter changes. To show the capabilities and verify the controller design, flight test results are presented for the following three extreme failure and uncertainty conditions: 1. Simulated power loss of one of the motors; 2. Disturbance due to an external weight hung on a quadcopter arm and cut off during flight; 3. Complete gain resets to zero during flight. The experimental results show that the adaptive controller can adapt quickly enough to maintain stability and restore the desired transient performance under these adverse conditions. The presented adaptive control system and its implementation on the quadcopter and its microprocessor hardware using the simple MATLAB/Simulink framework are a starting point for ongoing research and development of adaptive algorithms on Micro Aerial Vehicles. It enables one to perform low-cost validation of control algorithms in real flight experiments without the need for in-depth knowledge of programming languages or hardware design.
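The Model Reference Adaptive Control idea underlying the two paragraphs above — adapting feedback gains online so that the plant tracks a reference model, with stability guaranteed by a Lyapunov-based update law — can be shown in a minimal scalar sketch. The plant, reference model, and adaptation gain below are textbook illustrative values, not the quadcopter attitude dynamics or parameters from the paper.

```python
def simulate_mrac(a=-1.0, b=3.0, am=-4.0, bm=4.0, gamma=2.0,
                  dt=0.001, t_end=10.0):
    """Scalar MRAC sketch with the Lyapunov adaptation rule.
    Plant:           x'  = a*x + b*u        (a, b 'unknown' to the controller)
    Reference model: xm' = am*xm + bm*r     (desired closed-loop behavior)
    Control law:     u   = kx*x + kr*r, with kx, kr adapted online.
    The 1 kHz update rate mirrors the paper's loop rate; all other
    constants are illustrative. sign(b) > 0 is assumed known.
    Returns the final plant and reference-model states."""
    n = int(t_end / dt)
    x, xm, kx, kr = 0.0, 0.0, 0.0, 0.0
    for i in range(n):
        r = 1.0 if (i * dt) % 4.0 < 2.0 else -1.0  # square-wave reference
        e = x - xm                                  # model-tracking error
        u = kx * x + kr * r                         # adaptive control law
        # Lyapunov rule: drives V = e^2/2 + (b/2g)*(gain errors)^2 down
        kx -= gamma * e * x * dt
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt                   # Euler-integrate plant
        xm += (am * xm + bm * r) * dt               # and reference model
    return x, xm

x, xm = simulate_mrac()
```

For these values the ideal gains are kx* = (am - a)/b = -1 and kr* = bm/b = 4/3; the square-wave reference provides the excitation under which the adapted gains approach them, while the tracking error shrinks regardless. The in-flight gain resets tested in the paper correspond to re-running this loop from kx = kr = 0 mid-trajectory.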