One of the most challenging problems in autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is especially complex for micro and small aerial vehicles because of their Size, Weight and Power (SWaP) constraints, which make lightweight sensors such as digital cameras preferable to heavier alternatives such as laser scanners or radar. For real-time applications, many existing works rely on stereo cameras to build a 3D model of the obstacles or to estimate their depth. Instead, this paper proposes a method that mimics the human behavior of judging the collision state of approaching obstacles using a monocular camera. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those points in consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the image area of the approaching obstacles. First, the method detects the feature points of the obstacles and extracts those obstacles that are likely to be closing in on the UAV. Second, by comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show its accuracy compared with other related works.
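The convex-hull expansion idea from the abstract above can be sketched briefly. This is a minimal illustration, not the authors' implementation: it assumes matched feature points from two consecutive frames and compares the areas of their convex hulls; the `threshold` value is an arbitrary placeholder, and the paper additionally uses per-feature size changes, which are omitted here.

```python
import numpy as np
from scipy.spatial import ConvexHull

def expansion_ratio(prev_pts, curr_pts):
    """Ratio of convex-hull areas of matched feature points between
    two consecutive frames. A ratio > 1 means the obstacle's image
    footprint is growing, i.e. it is likely approaching the camera.
    For 2-D inputs, scipy's ConvexHull stores the enclosed area in
    the 'volume' attribute ('area' is the perimeter)."""
    prev_area = ConvexHull(np.asarray(prev_pts, dtype=float)).volume
    curr_area = ConvexHull(np.asarray(curr_pts, dtype=float)).volume
    return curr_area / prev_area

def is_approaching(prev_pts, curr_pts, threshold=1.2):
    """Flag a potential collision when the hull expands beyond an
    illustrative threshold (not a value taken from the paper)."""
    return expansion_ratio(prev_pts, curr_pts) > threshold
```

For example, a square of feature points whose side grows from 1 to 1.5 between frames yields an expansion ratio of 2.25 and would be flagged as approaching.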
Multi-Robot Systems (MRS) have become one of the most active fields of research in robotics in recent years, owing to the robustness and versatility with which they can undertake a set of tasks autonomously. For several vehicles, in this case Unmanned Aerial Vehicles (UAVs), to perform tasks autonomously and cooperatively, one essential element is trajectory planning, which is necessary to guarantee the safe, collision-free movement of the different vehicles. This document addresses the planning of multiple trajectories for a swarm of UAVs based on 3D Probabilistic Roadmaps (PRM). The swarm is capable of reaching different locations of interest in different cases (labeled and unlabeled), supporting an Emergency Response Team (ERT) in emergencies in urban environments. In addition, an architecture based on the Robot Operating System (ROS) is presented to allow the simulation and integration of the developed methods in a UAV swarm. This architecture communicates via the MAVLink protocol and controls the vehicles through the Pixhawk autopilot, enabling a quick and easy deployment on real UAVs. The proposed method was validated by experiments simulating building emergencies. Finally, the obtained results show that methods based on probabilistic roadmaps create effective solutions in terms of computation time for scalable systems in different situations, along with their integration into a versatile framework such as ROS.
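A probabilistic roadmap of the kind described above can be sketched in a few lines. This is a simplified, self-contained illustration rather than the paper's planner: it samples random collision-free 3D points, connects nearby pairs (checking only the segment midpoint, a stand-in for proper segment collision checking), and runs Dijkstra over the resulting graph; the `is_free` predicate, sample count, and connection radius are all assumptions.

```python
import heapq
import math
import random

def build_prm(bounds, n_samples, radius, is_free, seed=0):
    """Sample collision-free 3-D points inside axis-aligned bounds
    and connect pairs closer than 'radius' whose midpoint is free."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        p = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        if is_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            mid = tuple((a + b) / 2 for a, b in zip(nodes[i], nodes[j]))
            if d <= radius and is_free(mid):
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def shortest_path(edges, start, goal):
    """Dijkstra over the roadmap; returns a list of node indices
    from start to goal, or None when the two are disconnected."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return None
```

In a multi-UAV setting, one roadmap can be shared by the whole swarm, with each vehicle querying its own start/goal pair; deconfliction between the resulting paths is a separate step not shown here.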
The automation of the Wilderness Search and Rescue (WiSAR) task requires a high level of understanding of varied scenery. In addition, working in hostile and complex environments may delay the operation and consequently put human lives at risk. To address this problem, Unmanned Aerial Vehicles (UAVs) are used to provide support to the conventional methods. These vehicles need reliable human detection and tracking algorithms, in order to find and track the bodies of the victims in complex environments, and a robust control system to maintain safe distances from the detected bodies. In this paper, a human detection method based on the color and depth data captured by onboard sensors is proposed. Moreover, computing data association from the skeleton pose together with a visual appearance measurement allows tracking multiple people with invariance to the scale, translation, and rotation of the point of view with respect to the target objects. The system has been validated with real and simulated experiments, and the obtained results show its ability to track multiple individuals even after long-term disappearances. Furthermore, the simulations demonstrate the robustness of the implemented reactive control system as a promising tool for assisting the pilot in performing approach maneuvers in a safe and smooth manner.
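The pose-plus-appearance data association described above is commonly solved as a linear assignment problem. The sketch below is an assumption-laden illustration, not the paper's method: it builds a cost matrix from Euclidean distances between track and detection descriptors, solves it with the Hungarian algorithm, and gates out implausible matches; the weights, the `gate` threshold, and the dictionary fields `pose`/`app` are all invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, w_pose=0.5, w_app=0.5, gate=1.0):
    """Match existing tracks to new detections by minimising a
    weighted sum of a skeleton-pose distance and an appearance
    distance. Returns a list of (track_index, detection_index)
    pairs; pairs whose cost exceeds 'gate' are left unmatched."""
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            pose_d = np.linalg.norm(np.asarray(t["pose"]) - np.asarray(d["pose"]))
            app_d = np.linalg.norm(np.asarray(t["app"]) - np.asarray(d["app"]))
            cost[i, j] = w_pose * pose_d + w_app * app_d
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```

Because unmatched tracks are simply dropped from the returned pairs rather than deleted, a tracker built on top of this can keep them alive for a while, which is what enables re-identification after long-term disappearances.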
This is a postprint version of the following published document: Martín, D., et al. (2014)
Abstract: This paper presents the IVVI 2.0, a smart research platform to foster intelligent systems in vehicles. Computational perception in intelligent transportation systems applications has advantages, such as the huge amount of data available from the vehicle environment, so computer vision systems and laser scanners are the main devices that accomplish this task. Both have been integrated in our intelligent vehicle to develop cutting-edge applications that cope with perception difficulties, data processing algorithms, expert knowledge, and decision making. The long-term in-vehicle applications presented in this paper overcome the most significant and fundamental technical limitations, such as robustness in the face of changing environmental conditions. Our intelligent vehicle operates outdoors with pedestrians and other vehicles, and copes with illumination variation, i.e., shadows, low-lighting conditions, and night vision, among others. Thus, our applications ensure suitable robustness and safety under a large variety of lighting conditions and complex perception tasks. Some of these complex tasks are overcome through additional devices, such as inertial measurement units or differential global positioning systems, or through perception architectures that accomplish sensor fusion processes in an efficient and safe manner. Both the extra devices and the architectures enhance the accuracy of computational perception beyond the properties of each device separately.