2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT)
DOI: 10.1109/iciot48696.2020.9089657
Deep Learning for RF-Based Drone Detection and Identification: A Multi-Channel 1-D Convolutional Neural Networks Approach

Cited by 60 publications (33 citation statements)
References 20 publications
“…The proposed method could be extended by further research on UAV detection and classification performance improvement, including more effective feature extraction as well as novel classification models focusing on finer frequency details. Comparison (accuracy / F1 score): classification method in [21]: 46.8% / 43.0%; classification method in [22]: 59.2% / 55.1%; classification method in [23]: 87.4% / not reported.…”
Section: Discussion (mentioning)
confidence: 99%
“…The proposed model derives an accuracy of 59.2% and an F1 score of 55.1% for the ten-class classification. The multi-channel 1D CNN in [23] includes a feature extractor and a classical MLP.…”
Section: E. Comparison (mentioning)
confidence: 99%
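As a rough illustration of the architecture this citing work describes (a multi-channel 1-D CNN feature extractor followed by a classical MLP classifier), the sketch below shows one possible PyTorch realization. The channel count, kernel sizes, pooling, and layer widths are assumptions chosen for illustration; the ten-class output matches the ten-class classification mentioned above, but this is not the authors' published implementation.

# Illustrative sketch (not the authors' code): multi-channel 1-D CNN feature
# extractor followed by an MLP classifier head. Input shape, channel count,
# kernel sizes, and layer widths are assumptions for demonstration only.
import torch
import torch.nn as nn

class MultiChannel1DCNN(nn.Module):
    def __init__(self, in_channels=2, num_classes=10):
        super().__init__()
        # Feature extractor: stacked 1-D convolutions over raw RF samples.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(8),   # fixed-length feature map regardless of input length
        )
        # Classical MLP head producing class logits.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):              # x: (batch, channels, samples)
        return self.classifier(self.features(x))

# Example: a batch of 4 two-channel RF segments of 2048 samples each.
logits = MultiChannel1DCNN()(torch.randn(4, 2, 2048))
print(logits.shape)                    # torch.Size([4, 10])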
“…The effectiveness of the method is verified based on a practical dataset in [21]. The experiment results show that the proposed approach achieves an accuracy of 98.4% and an F1 score of 98.3%, and outperforms other state-of-the-art methods [21] [22] [23].…”
Section: Introduction (mentioning)
confidence: 99%
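For context, the accuracy and F1 score figures quoted in these comparisons can be computed as in the short sketch below; the labels and predictions here are purely hypothetical and are not taken from the paper or its dataset.

# Illustrative sketch of how accuracy and (macro) F1 score could be computed
# for a multi-class drone classifier; all values below are made up.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 3, 3]      # hypothetical ground-truth classes
y_pred = [0, 1, 2, 1, 1, 0, 3, 2]      # hypothetical model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))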
“…[25], [26]), or through camera-based target tracking from video streaming [27], [28], or by statistically monitoring network traffic data [29], [30]. Acoustic-based approaches are typically sensitive to environmental noise, whilst the visual quality of cameras is subject to surrounding conditions such as building blockage, ambient lighting, etc.…”
Section: Introduction (mentioning)
confidence: 99%