The use of Unmanned Aerial Vehicles (UAVs) is growing rapidly across a wide range of consumer applications, as they prove to be both autonomous and flexible in a variety of environments and tasks. However, this versatility and ease of use also enable malicious actors to employ UAVs for criminal activities, turning them into passive or active threats. The need to protect critical infrastructures and important events from such threats has driven advances in counter-UAV (c-UAV) applications. Nowadays, c-UAV systems comprise a multi-sensory arsenal, often including electro-optical, thermal, acoustic, radar, and radio-frequency sensors, whose information can be fused to increase the confidence of threat identification. Real-time surveillance is a cumbersome process, yet it is essential for promptly detecting adverse events or conditions. To that end, many challenging tasks arise, such as object detection, classification, multi-object tracking, and multi-sensor information fusion. In recent years, researchers have applied deep learning methodologies to these tasks for generic objects and made noteworthy progress, yet applying deep learning to UAV detection and classification is still a novel concept. Hence the need for a complete overview of deep learning technologies applied to c-UAV tasks on multi-sensor data. The aim of this paper is to describe deep learning advances on c-UAV related tasks when applied to data originating from many different sensors, as well as to multi-sensor information fusion. This survey may help in making recommendations and improvements to c-UAV applications in the future.
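To make the multi-sensor fusion idea concrete, here is a minimal late-fusion sketch: per-sensor detection confidences are combined by a weighted average. The sensor names, scores, and weights are illustrative assumptions, not values from the survey.

```python
def fuse_confidences(scores: dict, weights: dict) -> float:
    """Late fusion: weighted average of per-sensor detection confidences.
    scores and weights are keyed by sensor name; weights need not sum to 1."""
    total_w = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total_w

# Hypothetical confidences from three sensor modalities for one track.
fused = fuse_confidences(
    {"optical": 0.9, "thermal": 0.7, "rf": 0.4},
    {"optical": 0.5, "thermal": 0.3, "rf": 0.2},
)
```

In a real c-UAV system the weights would typically be learned or tuned per deployment; this sketch only shows the decision-level (late) fusion pattern.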
Recent film releases such as Avatar have revolutionized cinema by combining 3D technology, content production, and real actors, leading to the creation of a new genre at the outset of the 2010s. The success of 3D cinema has led several major consumer electronics manufacturers to launch 3D-capable televisions and broadcasters to offer 3D content. Today's 3DTV technology is based on stereo vision, which presents left- and right-eye images through temporal or spatial multiplexing to viewers wearing a pair of glasses. The next step in 3DTV development will likely be a multiview autostereoscopic imaging system, which will record and present many pairs of video signals on a display and will not require viewers to wear glasses. Although researchers have proposed several autostereoscopic displays, the resolution and viewing positions are still limited. Furthermore, stereo and multiview technologies rely on the brain to fuse two disparate images to create the 3D effect. As a result, such systems tend to cause eye strain, fatigue, and headaches after prolonged viewing, because users are required to focus on the screen plane (accommodation) while converging their eyes to a point in a different plane (convergence), producing unnatural viewing. Recent advances in digital technology have eliminated some of these human factors, but some intrinsic eye fatigue will always exist with stereoscopic 3D technology. These facts have motivated researchers to seek alternative means of capturing true 3D content, most notably holography and holoscopic imaging. Because recording holograms requires the interference of coherent light fields, their use is still limited and mostly confined to research laboratories. Holoscopic imaging (also referred to as integral imaging), on the other hand, in its simplest form consists of a lens array mated to a digital sensor, with each lens capturing a perspective view of the scene.
In this case, the light field does not need to be coherent, so holoscopic color images can be obtained with full parallax. This conveniently lets us adopt more conventional live capture and display procedures. Furthermore, 3D holoscopic imaging offers fatigue-free viewing to more than one person, independent of the viewers' positions. Due to recent advances in theory and microlens manufacturing, 3D holoscopic imaging is becoming a practical, prospective 3D display technology and is thus attracting much interest in the 3D area. The 3D Live Immerse Video-Audio Interactive Multimedia (3D Vivant, www.3dvivant.eu) project, funded by the EU-FP7 ICT-4-1.5 Networked Media and 3D Internet, has proposed advances in 3D holoscopic imaging technology for the capture, representation, processing, and display of 3D holoscopic content that overcome most of the aforementioned restrictions faced by traditional 3D technologies. This article presents our work as part of the 3D Vivant project.

3D Holoscopic Content Generation

The 3D holoscopic imaging technique creates and represents a true volume spatial optical model of the object...
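The lens-array capture described above records one small elemental image behind each microlens; a single perspective view can then be reconstructed by sampling the same pixel position from every elemental image. The sketch below illustrates this standard integral-imaging view extraction on a toy sensor; the grid sizes are made-up assumptions.

```python
def extract_view(holoscopic, lens_px, u, v):
    """Pull one perspective view out of a holoscopic (integral) image by
    sampling pixel (u, v) from the elemental image behind every microlens.
    holoscopic is a 2D list of pixel values; lens_px is the elemental
    image width/height in pixels."""
    return [row[v::lens_px] for row in holoscopic[u::lens_px]]

# Toy 8x8 sensor behind a 2x2 array of 4-pixel-wide microlenses:
# each (u, v) choice yields a different 2x2 viewpoint image.
sensor = [[r * 8 + c for c in range(8)] for r in range(8)]
view = extract_view(sensor, lens_px=4, u=0, v=0)
```

Sweeping (u, v) over the elemental image produces the set of perspective views that gives holoscopic imaging its full parallax.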
Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the "drone-vs-bird detection challenge" to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may also be present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. This paper reports on the challenge proposal, evaluation, and results.
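A per-frame scoring rule of the kind such a challenge implies can be sketched as follows; the distance threshold and the exact outcome labels are illustrative assumptions, not the challenge's actual evaluation protocol.

```python
def evaluate_frame(pred, gt, max_dist=20.0):
    """Score one video frame. pred and gt are (x, y) drone centers,
    or None when no alarm was raised / no drone is present.
    Returns 'TP', 'FP', 'FN', or 'TN'."""
    if gt is None:                       # no drone in the frame
        return "TN" if pred is None else "FP"   # alarm on birds/background = FP
    if pred is None:                     # drone present but missed
        return "FN"
    dx, dy = pred[0] - gt[0], pred[1] - gt[1]
    return "TP" if (dx * dx + dy * dy) ** 0.5 <= max_dist else "FP"

outcome = evaluate_frame((10, 10), (12, 9))  # within the tolerance radius
```

Aggregating these outcomes over all frames yields the detection and false-alarm rates used to rank submissions.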
The popularity of Unmanned Aerial Vehicles (UAVs) is increasing year by year, and their applications reportedly hold a large share of the global technology market. Yet, since UAVs can also be used for illegal actions, various security issues arise that need to be addressed. To this end, UAV detection systems have emerged to detect and anticipate hostile drones. A very significant factor is the maximum detection range at which the system's sensors can "see" an incoming UAV. For systems that employ optical cameras to detect UAVs, the main issue is accurate drone detection as the target fades into the sky. This work proposes incorporating Super-Resolution (SR) techniques into the detection pipeline to increase its recall capabilities. A deep SR model is applied before the UAV detector to enlarge the image by a factor of 2. Both models are trained in an end-to-end manner to fully exploit the joint optimization effects. Extensive experiments demonstrate the validity of the proposed method, where potential gains in the detector's recall performance can reach up to 32.4%.
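The SR-before-detection pipeline can be sketched minimally as below. A nearest-neighbour 2x upsampler stands in for the learned deep SR model, and a brightness-threshold "detector" stands in for the UAV detector; both are toy placeholders, not the paper's models.

```python
def upscale_x2(image):
    """Nearest-neighbour 2x upscaling, a toy stand-in for a deep SR model."""
    wide = [[px for px in row for _ in (0, 1)] for row in image]
    return [row for row in wide for _ in (0, 1)]

def detect(image, threshold=0.5):
    """Toy detector: flags bright pixels as candidate target locations."""
    return [(y, x) for y, row in enumerate(image)
            for x, px in enumerate(row) if px > threshold]

frame = [[0.0] * 4 for _ in range(4)]
frame[1][2] = 0.9                      # a faint, small target
hits = detect(upscale_x2(frame))       # SR first, then detection
```

The point of the arrangement is that the detector operates on the enlarged frame, so a distant drone occupies more pixels; in the paper both stages are differentiable networks trained jointly rather than fixed functions like these.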
Adopting effective techniques to automatically detect and identify small drones is a very compelling need for a number of different stakeholders in both the public and private sectors. This work presents three original approaches that competed in a grand challenge on the "Drone vs. Bird" detection problem. The goal is to detect one or more drones appearing at some time point in video sequences where birds and other distractor objects may also be present, together with motion in the background or foreground. Algorithms should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds or being confused by the rest of the scene. The three approaches, based on different deep learning strategies, are compared on a real-world dataset provided by a consortium of universities and research centers under the 2020 edition of the Drone vs. Bird Detection Challenge. Results show that the test sequences vary in difficulty, depending on the size and shape visibility of the drone, with sequences recorded by a moving camera and very distant drones being the most challenging. The performance comparison reveals that the different approaches are somewhat complementary in terms of correct detection rate, false alarm rate, and average precision.
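One natural way to exploit complementary detectors, sketched here as an illustration rather than as anything the paper proposes, is to take the union of their box predictions while suppressing near-duplicates by intersection-over-union (IoU).

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_detections(dets_a, dets_b, thr=0.5):
    """Union of two detectors' boxes, dropping b-boxes that overlap an
    already-kept box by IoU >= thr (a simple duplicate filter)."""
    merged = list(dets_a)
    for b in dets_b:
        if all(iou(a, b) < thr for a in merged):
            merged.append(b)
    return merged

# Two detectors agree on one drone; the second also finds a distant one.
merged = merge_detections([(0, 0, 10, 10)],
                          [(1, 1, 10, 10), (50, 50, 60, 60)])
```

Such an ensemble can raise the correct detection rate at the cost of inheriting each member's false alarms, which matches the complementary behaviour the comparison reports.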