2020
DOI: 10.1109/access.2020.3012712
Few-Shot Modulation Classification Method Based on Feature Dimension Reduction and Pseudo-Label Training

Abstract: In the modulation classification domain, handcrafted-feature-based methods can fit well from a few labeled samples, while deep-learning-based methods require a large number of samples to achieve superior classification performance. To improve modulation classification accuracy under the constraint of limited labeled samples, this paper proposes a few-shot modulation classification method based on feature dimension reduction and pseudo-label training (FDRPLT), which combines handcrafted-feature-based…

Cited by 10 publications (6 citation statements) | References 32 publications
“…More importantly, the raw IQ input was not considered in their hybrid structure, which may lead to severe performance loss. The authors in [51] focused on the semi-supervised learning scenario, where handcrafted features, such as high-order cumulant features, entropy features, and time-frequency features, were combined with unsupervised features extracted by an autoencoder, together with the labeled samples, to train an annotator that labels the unlabeled samples. As a result, adequate pseudo-labeled samples and a few real-labeled samples were applied to train a classifier.…”
Section: Hybrid AMC Methods
confidence: 99%
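The pipeline this statement describes (handcrafted plus autoencoder features, an annotator trained on the few real-labeled samples, then a classifier trained on real and pseudo labels) can be illustrated with a minimal Python sketch. PCA stands in for the autoencoder of [51], an MLP for both the annotator and the classifier, and all data shapes, feature functions, and layer sizes are illustrative assumptions, not values from the cited paper.

```python
import numpy as np
from sklearn.decomposition import PCA            # stand-in for the autoencoder
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy data: rows are signals, columns raw samples (placeholders for real IQ data).
X_lab = rng.normal(size=(100, 128))
y_lab = rng.integers(0, 4, size=100)
X_unlab = rng.normal(size=(1000, 128))

def handcrafted(X):
    # Illustrative stand-ins for the cumulant/entropy/time-frequency features.
    return np.column_stack([X.mean(axis=1), X.std(axis=1), np.abs(X).max(axis=1)])

# Unsupervised feature learner fitted on all available signals
# (PCA here, where [51] uses an autoencoder).
encoder = PCA(n_components=30).fit(np.vstack([X_lab, X_unlab]))

def features(X):
    return np.hstack([handcrafted(X), encoder.transform(X)])

# Annotator trained on the few real-labeled samples.
annotator = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
annotator.fit(features(X_lab), y_lab)

# Pseudo-label the unlabeled pool, then train the final classifier on both sets.
y_pseudo = annotator.predict(features(X_unlab))
classifier = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
classifier.fit(np.vstack([features(X_lab), features(X_unlab)]),
               np.concatenate([y_lab, y_pseudo]))
```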
“…Different deep ResNet models differ in the number of convolution layers, including ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. ResNet can be used to train deep convolutional neural networks, and the network framework is relatively simple [9].…”
Section: Proposed Methods
confidence: 99%
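As a minimal sketch of using such a backbone for modulation classification: torchvision's ResNet-18 (the smallest variant listed above) with its 3-channel RGB stem swapped for a 2-channel I/Q input. The input shape and class count are assumptions for illustration, not details from [9].

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Assumed setup: IQ samples arranged as a 2-channel 32x32 "image"; the class
# count and shapes are illustrative, not values from the cited work.
num_classes = 11
model = resnet18(num_classes=num_classes)
# Replace the 3-channel RGB stem with a 2-channel (I/Q) one.
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)

x = torch.randn(8, 2, 32, 32)   # batch of 8 dummy IQ inputs
logits = model(x)               # shape: (8, num_classes)
```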
“…In cases where unlabeled data are available, pseudo-labeling can be adopted to label such data, as shown in [50]. Before pseudo-labeling, the feature set, consisting of 10 handcrafted features and 30 autoencoder (AE)-learned features, is optimized using a fast correlation-based filter to remove redundant and irrelevant features.…”
Section: E. Data Augmentation
confidence: 99%
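A fast correlation-based filter (FCBF) ranks features by relevance to the label and drops any feature that is more strongly related to an already-selected feature than to the label. The sketch below follows that greedy scheme but substitutes scikit-learn's mutual-information estimators for the symmetrical-uncertainty measure of the original FCBF; the function name and threshold are assumed illustrative choices, not the settings used in [50].

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def fcbf_like(X, y, threshold=0.01):
    """Greedy FCBF-style selection: rank by relevance to y, then drop any
    feature more informative about a kept feature than about the label."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = []
    for j in np.argsort(relevance)[::-1]:
        if relevance[j] < threshold:
            break                          # remaining features are irrelevant
        redundant = any(
            mutual_info_regression(X[:, [j]], X[:, k], random_state=0)[0]
            >= relevance[j]
            for k in selected
        )
        if not redundant:
            selected.append(j)
    return selected

# Toy usage: 40 features, echoing the 10 handcrafted + 30 AE-learned above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
print(fcbf_like(X, y))   # feature 0 should rank among the survivors
```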
“…The optimal feature set is fed to a pseudo-label annotator, which labels an unlabeled sample only if its first softmax probability is higher than the sum of the second and third softmax probabilities. However, [50] assumes that 100 labeled samples are available for each class at each SNR. Moreover, the applied pseudo-labeling policy cannot guarantee that the selected label is correct, especially when applied to signals with unseen transmitter and channel parameters and hardware imperfections.…”
Section: E. Data Augmentation
confidence: 99%
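The acceptance rule quoted above (top softmax probability greater than the sum of the second and third) translates directly into code. A minimal sketch; the function name and toy probabilities are illustrative.

```python
import numpy as np

def confident_pseudo_labels(probs):
    """Accept a pseudo-label only when the top softmax probability exceeds
    the sum of the second and third largest, per the policy quoted above."""
    top3 = np.sort(probs, axis=1)[:, ::-1][:, :3]   # three largest per row
    accept = top3[:, 0] > top3[:, 1] + top3[:, 2]
    return np.flatnonzero(accept), probs[accept].argmax(axis=1)

probs = np.array([[0.70, 0.20, 0.10],    # 0.70 > 0.30 -> accepted, label 0
                  [0.40, 0.35, 0.25]])   # 0.40 < 0.60 -> rejected
idx, labels = confident_pseudo_labels(probs)
print(idx, labels)                       # [0] [0]
```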