2022
DOI: 10.1109/lcomm.2021.3135688
AMCRN: Few-Shot Learning for Automatic Modulation Classification

Cited by 41 publications (17 citation statements)
References 11 publications
“…There are two main reasons for this: on the one hand, some modulated signals are intrinsically very similar and become even harder to distinguish under noise; on the other hand, when noise interferes with the signals, the differences between their features shrink, leading to mutual misclassification. As shown in Figure 4, we analyse recognition with convolution kernels of size (1,5), (1,6), (1,7), and (1,8) under the RLADNN model. As the convolutional kernel shrinks, the overall recognition rate gradually decreases: a smaller kernel has a smaller receptive field and captures less of the feature as a whole, so when a blurred feature is encountered, a relatively small kernel may fail to represent it.…”
Section: Rladnn Modelmentioning
confidence: 99%
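The quoted point, that a smaller kernel sees less of the signal, can be made concrete with a minimal sketch. The function names below are illustrative (the RLADNN implementation is not shown in this excerpt); the receptive-field formula for stacked stride-1 convolutions is standard.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Plain 'valid' 1-D cross-correlation: each output sample sees
    exactly len(kernel) input samples, i.e. its receptive field."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def receptive_field(kernel_sizes):
    """Receptive field of a stack of stride-1 conv layers:
    rf = 1 + sum(k_i - 1)."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# A single layer with each of the four kernel widths from the quoted experiment.
for k in (5, 6, 7, 8):
    print(f"kernel (1,{k}): receptive field = {receptive_field([k])}")

# Stacking layers grows the field additively, e.g. two (1,5) layers:
print(receptive_field([5, 5]))  # -> 9
```

A (1,5) kernel covers five I/Q samples per output while a (1,8) kernel covers eight, which is consistent with the quoted trend that wider kernels capture more of each feature.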
“…Zhou et al. [7] proposed an automatic modulation classification relation network (AMCRN) structure to improve performance with few samples; it uses one‐dimensional convolutions to effectively reduce dimensionality, cut redundancy, and remove some noise. The global average pooling (GAP) method used by Hao et al.…”
Section: Introductionmentioning
confidence: 99%
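The dimensionality reduction mentioned here, global average pooling after a 1-D convolutional stage, can be sketched in a few lines. The feature-map shape below is hypothetical; GAP itself is just a mean over the time axis, one value per channel.

```python
import numpy as np

# Hypothetical feature map from a 1-D convolutional stage:
# shape (channels, time_steps).
features = np.arange(12, dtype=float).reshape(3, 4)  # 3 channels, 4 steps

# Global average pooling collapses the time axis to one scalar per
# channel, shrinking (3, 4) to (3,) before the classifier head.
gap = features.mean(axis=-1)
print(gap)  # -> [1.5 5.5 9.5]
```

Compared with flattening, this keeps the representation size independent of the input signal length, which is one reason GAP is a common head for variable-length signals.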
“…For example, a DL-AMR model can be trained on offline datasets such as RML2016.10a and fine-tuned on online data obtained in the real world via a transfer-learning strategy. If no training data, or only a few training samples, are available for some signal classes, then zero-shot [11] and few-shot [94,37,91] learning are possible solutions for signal recognition. To improve the robustness of a model, methods that can withstand adversarial attacks need to be further investigated [58].…”
Section: Designing Novel Dl-amr Modelsmentioning
confidence: 99%
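The few-shot setting referenced above is usually posed as N-way K-shot episodes: a small labelled support set per class plus a query set to classify. A minimal episode sampler, with illustrative names and a toy dataset standing in for real modulation signals, might look like:

```python
import random

def make_episode(dataset, n_way=3, k_shot=2, q_query=2, seed=0):
    """Sample one N-way K-shot episode. `dataset` maps a class label
    to a list of samples; support gets k_shot labelled examples per
    class and query gets q_query held-out examples per class."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        picks = rng.sample(dataset[c], k_shot + q_query)
        support += [(x, c) for x in picks[:k_shot]]
        query += [(x, c) for x in picks[k_shot:]]
    return support, query

# Toy "signal" dataset: modulation label -> list of sample ids.
data = {m: [f"{m}_{i}" for i in range(10)]
        for m in ["BPSK", "QPSK", "8PSK", "16QAM"]}
support, query = make_episode(data)
print(len(support), len(query))  # -> 6 6
```

Training on many such episodes, rather than on a fixed label set, is what lets few-shot models generalise to signal classes with scarce data.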
“…The authors in [7] proposed a new FSL framework, the Attention Relation Network (ARN), which uses channel and spatial attention to effectively extract features from a support set. In addition, a novel network architecture called AMCRN, an AMC algorithm using a relation network, is proposed in [8]. The architecture reached a maximum classification accuracy of 93% and showed a performance improvement of 10 to 50% compared to existing baseline schemes.…”
Section: Introductionmentioning
confidence: 99%
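The relation-network idea behind AMCRN can be sketched at a high level: pair the query embedding with each class's support embedding and let a learned relation module score the pair. The relation module here is a hand-written stand-in (negative L2 distance), not the learned network from the paper.

```python
import numpy as np

def relation_scores(query_emb, support_embs, relation_fn):
    """Relation-network style classification sketch: concatenate the
    query embedding with each class embedding and score each pair.
    `relation_fn` stands in for the learned relation module."""
    pairs = [np.concatenate([query_emb, s]) for s in support_embs]
    return np.array([relation_fn(p) for p in pairs])

def toy_relation(pair):
    """Toy relation module: negative L2 distance between the halves.
    A real relation network would be a small trained neural net."""
    half = len(pair) // 2
    return -np.linalg.norm(pair[:half] - pair[half:])

# Two class embeddings and one query close to class 0.
protos = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
q = np.array([0.9, 0.1])
scores = relation_scores(q, protos, toy_relation)
print(int(scores.argmax()))  # -> 0
```

The predicted class is the one whose support embedding the relation module scores highest against the query.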