This paper proposes and evaluates a modular architecture for an autonomous unmanned aerial vehicle (UAV) system for search and rescue missions. Multiple multicopters are coordinated using a distributed control system. The system is implemented in the Robot Operating System (ROS) and can provide a real-time video stream from a UAV to one or more base stations over a wireless communications infrastructure. It supports a heterogeneous set of UAVs and camera sensors. If necessary, an operator can intervene and reduce the system's level of autonomy. The system has been tested in an outdoor mission serving as a proof of concept, and insights from these tests are described in the paper.
The deployment of multiple robots to achieve a common goal helps to improve performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss four classes of control techniques: cooperative multirobot observation of multiple moving targets; cooperative search, acquisition, and track; cooperative tracking; and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot, and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is also useful for related multirobot applications.
Advances in control engineering and material science have made it possible to develop small-scale unmanned aerial vehicles (UAVs) equipped with cameras and sensors. These UAVs enable us to obtain a bird's-eye view of the environment. Having access to an aerial view over large areas is helpful in disaster situations, where often only incomplete and inconsistent information is available to the rescue team. In such situations, airborne cameras and sensors are valuable sources of information that help us to build an "overview" of the environment and to assess the current situation.

This paper reports on our ongoing research on deploying small-scale, battery-powered, and wirelessly connected UAVs carrying cameras for disaster management applications. In this "aerial sensor network," several UAVs fly in formation and cooperate to achieve a certain mission. The ultimate goal is an aerial imaging system in which UAVs build a flight formation, fly over a disaster area such as a forest fire or a large traffic accident, and deliver high-quality sensor data such as images or videos. These images and videos are communicated to the ground, fused, analyzed in real time, and finally delivered to the user.

In this paper we introduce our aerial sensor network and its application in disaster situations. We discuss the challenges of such aerial sensor networks and focus on the optimal placement of sensors. We formulate the coverage problem as an integer linear program (ILP) and present initial evaluation results.
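The coverage problem mentioned above can be illustrated as a 0/1 covering program: binary variables select candidate sensor placements so that every target cell is covered by at least one selected placement, with the number of placements minimized. The sketch below is not the paper's formulation or data; it uses a hypothetical toy instance and solves the tiny ILP by exhaustive search in place of a real solver.

```python
from itertools import combinations

# Hypothetical toy instance: each candidate UAV position covers a set of cells.
coverage = {
    "p1": {1, 2, 3},
    "p2": {3, 4},
    "p3": {4, 5},
    "p4": {1, 5},
}
targets = {1, 2, 3, 4, 5}

def min_cover(coverage, targets):
    """Smallest set of positions covering all targets (the ILP's optimum,
    found by brute force; a real system would use an ILP solver)."""
    positions = list(coverage)
    for k in range(1, len(positions) + 1):
        for subset in combinations(positions, k):
            if set().union(*(coverage[p] for p in subset)) >= targets:
                return subset
    return None  # infeasible: some cell is covered by no position

print(min_cover(coverage, targets))  # → ('p1', 'p3')
```

Exhaustive search is exponential and only viable for illustration; the point of the ILP formulation is that off-the-shelf solvers handle realistically sized instances.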
Visual sensor networks (VSNs) are receiving a lot of attention in research, and at the same time, commercial applications are starting to emerge. VSN devices come with image sensors, adequate processing power, and memory. They use wireless communication interfaces to collaborate and jointly solve tasks such as tracking persons within the network. VSNs are expected not only to replace many traditional, closed-circuit surveillance systems but also to enable emerging applications in scenarios such as elderly care, home monitoring, or entertainment. In all of these applications, VSNs monitor a potentially large group of people and record sensitive image data that might contain identities of persons, their behavior, interaction patterns, or personal preferences. These intimate details can be easily abused, for example, to derive personal profiles. The highly sensitive nature of images makes security and privacy in VSNs even more important than in most other sensor and data networks. However, the direct use of security techniques developed for related domains might be misleading due to the different requirements and design challenges. This is especially true for aspects such as data confidentiality and privacy protection against insiders, generating awareness among monitored people, and giving trustworthy feedback about recorded personal data—all of these aspects go beyond what is typically required in other applications. In this survey, we present an overview of the characteristics of VSN applications, the involved security threats and attack scenarios, and the major security challenges. A central contribution of this survey is our classification of VSN security aspects into data-centric, node-centric, network-centric, and user-centric security. We identify and discuss the individual security requirements and present a profound overview of related work for each class. We then discuss privacy protection techniques and identify recent trends in VSN security and privacy. A discussion of open research issues concludes this survey.
In this paper we present an approach to object tracking handover in a network of smart cameras, based on self-interested autonomous agents that exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph, i.e., the camera neighbourhood relations, at runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, relying only on local information, increasing the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
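The ant-colony-inspired idea of learning a vision graph at runtime can be sketched as pheromone bookkeeping: each successful tracking handover between two cameras deposits pheromone on their link, all links evaporate over time, and links whose pheromone stays above a threshold define a camera's learned neighbourhood. The class below is a minimal illustration under these assumptions, not the paper's algorithm; the deposit, evaporation, and threshold values are illustrative.

```python
class VisionGraphLearner:
    """Sketch of pheromone-based vision-graph learning (hypothetical
    parameters): handovers reinforce links, evaporation forgets stale ones."""

    def __init__(self, evaporation=0.1, deposit=1.0):
        self.evaporation = evaporation      # fraction of pheromone lost per step
        self.deposit = deposit              # pheromone added per handover
        self.pheromone = {}                 # (cam_a, cam_b) -> link strength

    def record_handover(self, cam_a, cam_b):
        """A successful handover reinforces the (undirected) camera link."""
        key = tuple(sorted((cam_a, cam_b)))
        self.pheromone[key] = self.pheromone.get(key, 0.0) + self.deposit

    def evaporate(self):
        """Decay all links; rarely used links fade out of the vision graph."""
        for key in list(self.pheromone):
            self.pheromone[key] *= 1.0 - self.evaporation

    def neighbours(self, cam, threshold=0.5):
        """Cameras whose link pheromone exceeds the threshold form
        cam's learned neighbourhood in the vision graph."""
        return sorted(
            other
            for (a, b), level in self.pheromone.items()
            for other in ((b,) if a == cam else (a,) if b == cam else ())
            if level > threshold
        )

learner = VisionGraphLearner()
learner.record_handover("cam1", "cam2")
learner.record_handover("cam1", "cam2")
learner.evaporate()
print(learner.neighbours("cam1"))  # → ['cam2']
```

Because each camera only needs its own handover history, this bookkeeping can run fully decentralised, matching the paper's emphasis on local information and the absence of a priori topology knowledge.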