2016 International Conference on Unmanned Aircraft Systems (ICUAS)
DOI: 10.1109/icuas.2016.7502521
An intruder detection algorithm for vision based sense and avoid system

Cited by 27 publications (15 citation statements)
References 8 publications

“…Vision-based methods are among the lowest-cost and most easily configured options. [1][2][3][4][5][6][7] Zhang et al. 8 proposed an intruder detection algorithm based on learning deep features from video images. The sliding-window technique is used to obtain the test samples.…”
Section: Vision-based Methods
confidence: 99%
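The sliding-window sampling mentioned in the statement above can be illustrated with a minimal sketch. This is not the cited authors' implementation; the window size, stride, and the `classify_patch` scorer (standing in for the learned deep-feature classifier) are hypothetical placeholders.

```python
def sliding_window_patches(frame, win=64, stride=32):
    """Yield (x, y, patch) tuples covering a frame with a fixed-size window.

    `frame` is assumed to be a NumPy image array of shape (H, W) or (H, W, C).
    """
    h, w = frame.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, frame[y:y + win, x:x + win]


def detect_intruder(frame, classify_patch, threshold=0.5):
    """Score every window with a supplied classifier and keep likely intruder locations."""
    detections = []
    for x, y, patch in sliding_window_patches(frame):
        score = classify_patch(patch)  # placeholder for a deep-feature classifier score in [0, 1]
        if score > threshold:
            detections.append((x, y, score))
    return detections
```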
“…In the C-RPAS structure, there are electro-optical (EO) and infrared systems for video detection and RPAS recognition. A drone can be detected based on specific features such as color, contour lines, geometric shapes [6], or edges and other movement characteristics [7]. There are also drones that mimic the flight of birds (ornithopters: per the DEX explanatory dictionary, an ornithopter is a heavier-than-air aircraft with flapping wings that imitates bird flight).…”
Section: The Influence Of Fingerprint Mitigation Devices On Visible and Infrared Spectrum From RPAS Structure On Detection
confidence: 99%
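As an illustration of the feature-based cues mentioned above (edges, contours, compact geometric shapes), the sketch below extracts candidate regions from a grayscale EO frame with OpenCV. The Canny thresholds and the area and aspect-ratio filters are assumptions for illustration only, not values from the cited work.

```python
import cv2

def candidate_drone_regions(gray, min_area=50, max_area=5000):
    """Return bounding boxes of small, compact contours found from an edge map."""
    edges = cv2.Canny(gray, 50, 150)  # edge detection on a grayscale EO frame
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue  # drop clutter and large background structures
        x, y, w, h = cv2.boundingRect(c)
        if 0.3 <= w / float(h) <= 3.0:  # keep roughly compact shapes
            boxes.append((x, y, w, h))
    return boxes
```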
“…The node term is measured by (4) and the edge term by (5), where ω = [ω₁, ω₂] is the vector of CRF weights; the energy function E(C, Y, ω) can therefore be formulated as (6).…”
Section: 3) The Layered Structure For Extracting Spatial Context
confidence: 99%
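The quoted statement refers to equations (4)-(6), which are not reproduced here; a generic CRF energy consistent with the description (a node term and an edge term combined by the weights ω = [ω₁, ω₂]) would take the following form. The potentials φ and ψ and the edge set 𝓔 are placeholders, not the cited paper's definitions.

```latex
% Generic weighted node/edge CRF energy; \phi and \psi stand in for the
% unquoted node and edge potentials of equations (4) and (5).
E(C, Y, \omega) = \omega_1 \sum_{i \in \mathcal{V}} \phi(c_i, Y)
               + \omega_2 \sum_{(i,j) \in \mathcal{E}} \psi(c_i, c_j, Y)
```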