Radio link quality estimation in Wireless Sensor Networks (WSNs) has a fundamental impact on network performance and also affects the design of higher-layer protocols. Consequently, for about a decade, it has attracted a vast body of research. Reported works on link quality estimation are typically based on different assumptions, consider different scenarios, and provide radically different (and sometimes contradictory) results. This article provides a comprehensive survey of the related literature, covering the characteristics of low-power links, the fundamental concepts of link quality estimation in WSNs, a taxonomy of existing link quality estimators, and their performance analysis. To the best of our knowledge, this is the first survey tackling link quality estimation in WSNs in detail. We believe our efforts will serve as a reference to orient researchers and system designers in this area.
Unmanned Aerial Vehicles (UAVs) are increasingly being used in surveillance and traffic monitoring thanks to their high mobility and ability to cover areas at different altitudes and locations. One of the major challenges is to use aerial images to accurately detect cars and count them in real time for traffic monitoring purposes. Several deep learning techniques based on convolutional neural networks (CNNs) have recently been proposed for real-time classification and recognition in computer vision. However, their performance depends on the scenarios where they are used. In this paper, we investigate the performance of two state-of-the-art CNN algorithms, namely Faster R-CNN and YOLOv3, in the context of car detection from aerial images. We trained and tested these two models on a large car dataset taken from UAVs. We demonstrate that YOLOv3 outperforms Faster R-CNN in sensitivity and processing time, although the two are comparable in precision.
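The sensitivity and precision metrics used to compare the two detectors follow the standard definitions over true-positive, false-positive, and false-negative detection counts; a minimal sketch (function names are illustrative, not from the paper):

```python
def precision(tp: int, fp: int) -> float:
    # Fraction of predicted cars that are actually cars.
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    # Also called recall: fraction of ground-truth cars that were detected.
    return tp / (tp + fn)

# Example: 90 correct detections, 10 spurious boxes, 30 missed cars.
print(precision(90, 10))    # 0.9
print(sensitivity(90, 30))  # 0.75
```

In practice a detection counts as a true positive only if its bounding box overlaps a ground-truth box above an IoU threshold.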
Segmenting aerial images holds great potential for surveillance and scene understanding of urban areas. It provides a means for automatically reporting the different events that happen in inhabited areas, which remarkably promotes public safety and traffic management applications. Since the wide adoption of convolutional neural network methods, the accuracy of semantic segmentation algorithms can easily surpass 80% when a robust dataset is provided. Despite this success, deploying a pre-trained segmentation model to survey a new city that is not included in the training set significantly decreases the accuracy. This is due to the domain shift between the source dataset on which the model is trained and the new target domain of the new city's images. In this paper, we address this issue and consider the challenge of domain adaptation in semantic segmentation of aerial images. We design an algorithm that reduces the impact of domain shift using Generative Adversarial Networks (GANs). In the experiments, we test the proposed methodology on the International Society for Photogrammetry and Remote Sensing (ISPRS) semantic segmentation dataset and find that our method improves the overall accuracy from 35% to 52% when passing from the Potsdam domain (the source domain) to the Vaihingen domain (the target domain). In addition, the method efficiently recovers classes inverted by sensor variation, improving their average segmentation accuracy from 14% to 61%.
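The overall-accuracy figures reported above (35% before adaptation, 52% after) are typically computed from a per-class confusion matrix over all labeled pixels; a minimal sketch of that computation (not the paper's code):

```python
def overall_accuracy(confusion):
    # confusion[i][j] = number of pixels of true class i predicted as class j.
    # Overall accuracy = correctly classified pixels / all pixels.
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Toy 2-class example: 8 + 9 correct pixels out of 20.
cm = [[8, 2],
      [1, 9]]
print(overall_accuracy(cm))  # 0.85
```

A class "inverted" by sensor variation shows up in such a matrix as most of its mass sitting in an off-diagonal cell, which is why per-class accuracy is also tracked separately.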
In this paper, we consider the use of a team of multiple unmanned aerial vehicles (UAVs) to accomplish a search and rescue (SAR) mission in the minimum time possible while saving the maximum number of people. A novel technique for the SAR problem is proposed, referred to as the layered search and rescue (LSAR) algorithm. The novelty of LSAR involves simulating real disasters to distribute SAR tasks among UAVs. The performance of LSAR is compared, in terms of percentage of rescued survivors and rescue and execution times, with the max-sum, auction-based, locust-inspired approach for multi-UAV task allocation (LIAM), and opportunistic task allocation (OTA) schemes. The simulation results show that UAVs running the LSAR algorithm rescue approximately 74% of the survivors on average, which is 8% higher than the next best algorithm (LIAM). Moreover, this percentage increases almost linearly with the number of UAVs, with the smallest slope, indicating better scalability and coverage than the other algorithms. In addition, the empirical cumulative distribution function of the LSAR results shows that the percentages of rescued survivors cluster in the [78%-100%] range under an exponential curve, meaning most results are above 50%; in comparison, all the other algorithms have almost uniform distributions of their rescued-survivor percentages. Furthermore, because the LSAR algorithm focuses on the center of the disaster, it finds more survivors and rescues them faster than the other algorithms, with an average of 55%-77%. Moreover, most registered times to rescue survivors by LSAR are bounded by 04:50:02 with 95% confidence for a one-month mission time.

INDEX TERMS: Autonomous agents, drones, search and rescue, unmanned aerial vehicles.
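The "layered" idea of focusing UAVs on the disaster center can be illustrated with a toy allocation sketch: rank UAVs by distance to the center and fill the innermost search layers first. This is only a conceptual illustration under assumed names and data structures, not the LSAR algorithm itself:

```python
import math

def assign_layers(uav_positions, center, num_layers):
    """Toy sketch (not the paper's method): sort UAVs by distance to the
    disaster center and assign the closest UAVs to the innermost layers,
    mirroring LSAR's emphasis on searching the center first."""
    ranked = sorted(uav_positions, key=lambda p: math.dist(p, center))
    # UAV i searches layer i, with any surplus UAVs on the outermost layer.
    return {pos: min(i, num_layers - 1) for i, pos in enumerate(ranked)}

# Three UAVs on a line, disaster at the origin, two concentric layers.
layers = assign_layers([(5, 0), (1, 0), (3, 0)], (0, 0), num_layers=2)
print(layers)  # {(1, 0): 0, (3, 0): 1, (5, 0): 1}
```

In the full problem, each layer would correspond to an annular search region whose task load depends on the simulated survivor distribution.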