Search and rescue (SAR) operations can benefit significantly from the support of autonomous or teleoperated robots and multi-robot systems. These can aid in mapping and situational assessment, monitoring and surveillance, establishing communication networks, or searching for victims. This paper provides a review of multi-robot systems supporting SAR operations, covering system-level considerations and focusing on algorithmic perspectives for multi-robot coordination and perception. This is, to the best of our knowledge, the first survey paper to cover (i) heterogeneous SAR robots in different environments and (ii) active perception in multi-robot systems, while (iii) giving two complementary points of view from the multi-agent perception and control perspectives. We also discuss the most significant open research questions: shared autonomy, sim-to-real transferability of existing methods, awareness of victims' conditions, coordination and interoperability in heterogeneous multi-robot systems, and active perception. The different topics in the survey are put in the context of the challenges and constraints that various types of robots (ground, aerial, surface, or underwater) encounter in different SAR environments (maritime, urban, wilderness, or other post-disaster scenarios). The objective of this survey is to serve as an entry point to the various aspects of multi-robot SAR systems for researchers in both the machine learning and control fields by giving a global overview of the main approaches being taken in the SAR robotics area.
Remote healthcare monitoring has grown rapidly over the past decade together with the increasing penetration of Internet of Things (IoT) platforms. IoT-based health systems help to improve the quality of healthcare services through real-time data acquisition and processing. However, traditional IoT architectures have some limitations. For instance, they cannot function properly in areas with poor or unstable Internet connectivity. Low-power wide-area network (LPWAN) technologies, including long-range communication protocols such as LoRa, are potential candidates for overcoming the lack of network infrastructure. Nevertheless, LPWANs have limited transmission bandwidth, making them unsuitable for high-data-rate applications such as fall detection systems or electrocardiography monitoring. Therefore, data processing and compression are required at the edge of the network. We propose a system architecture with integrated artificial intelligence that combines Edge and Fog computing, LPWAN technology, IoT, and deep learning algorithms to perform health monitoring tasks. In particular, we demonstrate the feasibility and effectiveness of this architecture via a use case of fall detection using recurrent neural networks. We have implemented a fall detection system spanning the sensor node and Edge gateway to cloud services and end-user applications. The system uses inertial data as input and achieves an average precision of over 90% and an average recall of over 95% in fall detection.
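The abstract does not specify the network architecture; as an illustrative sketch only, the core of an RNN-based fall detector can be reduced to a single LSTM cell unrolled over a window of inertial samples, with a sigmoid read-out giving a fall probability. The layer sizes, window length, and random weights below are placeholder assumptions; a deployed system would load weights trained offline.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal single-layer LSTM with a sigmoid read-out (illustrative).

    Weights are random placeholders here; in an Edge deployment they
    would be trained offline and loaded onto the gateway.
    """
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked matrix for the input, forget, cell, and output gates
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.normal(0, 0.1, n_hidden)
        self.n_hidden = n_hidden

    def forward(self, window):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x_t in window:                       # one IMU sample per step
            z = self.W @ np.concatenate([x_t, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return sigmoid(self.w_out @ h)           # probability of a fall

# A 50-sample window of 3-axis accelerometer data (placeholder values)
window = np.random.default_rng(1).normal(0, 1, (50, 3))
p_fall = TinyLSTM(n_in=3, n_hidden=16).forward(window)
```

Running the recurrence on the gateway rather than the cloud is what keeps the LoRa uplink traffic down to a single classification result per window.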
The agricultural and farming industries have been widely influenced by the disruption of the Internet of Things. The impact of the IoT is more limited in countries with less penetration of mobile internet, such as sub-Saharan countries, where agriculture commonly accounts for 10 to 50% of GDP. The boom of low-power wide-area networks (LPWAN) in the last decade, with technologies such as LoRa or NB-IoT, has mitigated this by providing relatively cheap infrastructure that enables low-power, long-range transmissions. Nonetheless, the benefits of LPWAN technologies come at the cost of low-bandwidth transmissions. Therefore, the integration of Edge and Fog computing, moving data analytics and compression near end devices, is key to extending functionality. By integrating artificial intelligence at the local network layer, or Edge AI, we present a system architecture and implementation that expands the possibilities of smart agriculture and farming applications with Edge and Fog computing and LPWAN technology for large-area coverage. We propose and implement a system consisting of sensor nodes, an Edge gateway, LoRa repeaters, a Fog gateway, cloud servers, and an end-user terminal application. At the Edge layer, we propose the implementation of a CNN-based image compression method in order to send, in a single message, information about hundreds or thousands of sensor nodes within the gateway's range. We use advanced compression techniques to reduce the size of the data by up to 67% with a decompression error below 5%, within a novel scheme for IoT data.
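The paper's Edge layer uses a CNN-based image compression method, which is not reproduced here. As a simpler sketch of the same bookkeeping (compressed size versus reconstruction error under a tight LoRa payload budget), the example below quantizes float32 sensor readings to uint8 before transmission. The sensor count, value range, and node layout are illustrative assumptions.

```python
import numpy as np

def quantize(readings, lo, hi):
    """Map float readings in [lo, hi] to uint8 (4 bytes -> 1 byte each)."""
    return np.round((readings - lo) / (hi - lo) * 255).astype(np.uint8)

def dequantize(q, lo, hi):
    """Recover approximate float values from the uint8 codes."""
    return q.astype(np.float32) / 255 * (hi - lo) + lo

# Temperature readings from 200 hypothetical sensor nodes in range 15-35 C
rng = np.random.default_rng(0)
temps = rng.uniform(15.0, 35.0, 200).astype(np.float32)

codes = quantize(temps, 15.0, 35.0)
restored = dequantize(codes, 15.0, 35.0)

reduction = 1 - codes.nbytes / temps.nbytes                    # fraction saved
max_rel_err = np.max(np.abs(restored - temps) / np.abs(temps)) # worst-case error
```

Even this naive scheme yields a 75% size reduction with a worst-case relative error well under 5%; a learned compressor can push further by exploiting spatial correlation between neighboring nodes, which is the motivation for treating the aggregated readings as an image.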
Small unmanned aerial vehicles (UAVs) have penetrated multiple domains in recent years. In GNSS-denied or indoor environments, aerial robots require a robust and stable localization system, often with external feedback, in order to fly safely. Motion capture systems are typically utilized indoors when accurate localization is needed. However, these systems are expensive and most require a fixed setup. Recently, visual-inertial odometry and similar methods have advanced to a point where autonomous UAVs can rely on them for localization. The main limitations in this case come from the environment, as well as from accumulating error in long-term autonomy if loop closure cannot be performed efficiently. For instance, low visibility due to dust or smoke in post-disaster scenarios might render odometry methods inapplicable. In this paper, we study and characterize an ultra-wideband (UWB) system for navigation and localization of aerial robots indoors, based on Decawave's DWM1001 UWB node. The system is portable, inexpensive, and can be entirely battery powered. We show the viability of this system for autonomous flight of UAVs, and provide open-source methods and data that enable its widespread application even with movable anchor systems. We characterize the accuracy based on the position of the UAV with respect to the anchors, its altitude and speed, and the distribution of the anchors in space. Finally, we analyze the accuracy of the self-calibration of the anchors' positions.
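The abstract does not detail the positioning solver; as a hedged sketch of the underlying geometry, a UWB tag's position can be estimated from ranges to known anchors by linearizing the range equations (subtracting one anchor's equation from the rest) and solving the resulting linear system. The anchor coordinates below are illustrative, not the paper's setup.

```python
import numpy as np

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Least-squares position estimate from UWB ranges to known anchors.

    Subtracting the range equation of the last anchor from the others
    cancels the quadratic term |x|^2, leaving a linear system A x = b.
    """
    ref, r_ref = anchors[-1], ranges[-1]
    A = 2 * (anchors[:-1] - ref)
    b = (r_ref ** 2 - ranges[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Four hypothetical anchors in a 5 m x 5 m room, one mounted higher
anchors = np.array([[0.0, 0.0, 0.0],
                    [5.0, 0.0, 0.0],
                    [0.0, 5.0, 0.0],
                    [5.0, 5.0, 3.0]])
tag = np.array([2.0, 3.0, 1.5])                 # true UAV position
ranges = np.linalg.norm(anchors - tag, axis=1)  # noise-free ranges
estimate = multilaterate(anchors, ranges)
```

With noise-free ranges the estimate recovers the true position exactly; in practice the least-squares formulation also absorbs ranging noise when more than four anchors are available, and anchor geometry (including height diversity) drives the accuracy trends the paper characterizes.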