MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM)
DOI: 10.1109/milcom47813.2019.9020949

Developing RFML Intuition: An Automatic Modulation Classification Architecture Case Study

Cited by 11 publications (15 citation statements)
References 8 publications
“…The limited body of works includes an approach for end-to-end communications presented in [33], as well as several that have explored multitask learning as a method to both improve the explainability and accuracy of models trained to perform automatic modulation classification (AMC). More specifically, in both [34,35], modulation classes are broken into subgroups, either by modulation type (i.e., linear, frequency, etc.) or in order to separate the modulation schemes that cause the most confusion (i.e., 16QAM and 64QAM); moreover, in [36], concept bottleneck models were used to provide inherent decision explanations while performing AMC via the prediction of a set of intermediate concepts defined prior to training.…”
Section: Multitask Learning
confidence: 99%
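The subgroup idea described in this statement can be sketched as a shared feature extractor feeding two prediction heads: a coarse modulation-family task and a fine-grained scheme task, trained jointly. The PyTorch sketch below is a minimal illustration under assumed layer sizes, class counts, and loss weights; it is not the architecture of [34], [35], or [36].

```python
# Minimal multitask AMC sketch: shared 1D-CNN backbone over raw IQ samples,
# one head for the modulation family (coarse subgroup) and one for the
# specific scheme. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class MultitaskAMC(nn.Module):
    def __init__(self, num_families: int = 3, num_classes: int = 11):
        super().__init__()
        # Shared feature extractor over 2-channel (I/Q) input
        self.backbone = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Task-specific heads
        self.family_head = nn.Linear(64, num_families)  # coarse subgroup
        self.class_head = nn.Linear(64, num_classes)    # fine-grained scheme

    def forward(self, iq):
        feats = self.backbone(iq)                       # iq: (batch, 2, num_samples)
        return self.family_head(feats), self.class_head(feats)

# Joint training signal: weighted sum of the two cross-entropy losses
model = MultitaskAMC()
x = torch.randn(8, 2, 1024)                             # batch of 8 IQ snapshots
family_logits, class_logits = model(x)
family_y = torch.randint(0, 3, (8,))
class_y = torch.randint(0, 11, (8,))
loss = nn.functional.cross_entropy(family_logits, family_y) \
     + 0.5 * nn.functional.cross_entropy(class_logits, class_y)
loss.backward()
```

A weighted sum of per-task losses is the simplest way to realize the subgroup idea; the concept-bottleneck variant referenced in [36] would instead route the final prediction through an intermediate layer of concepts defined prior to training.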
“…The first group of approaches provides intrinsic interpretability by using inherently more interpretable models, either from the outset or extracted from a black-box model [141]. Examples of such models include decision trees [25], [142], attention mechanisms [143], clustering algorithms, or linear/Bayesian classifiers [144]. While these methods are typically the most straightforward and provide the most useful model/decision explanations, inherently interpretable models are typically less expressive than black-box models such as deep NNs, and therefore do not provide the same level of performance.…”
Section: Interpretation/Explanation
confidence: 99%
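As a concrete illustration of the inherently interpretable route mentioned here, the sketch below fits a shallow decision tree to a handful of hand-crafted signal statistics and prints its decision rules. The feature set, synthetic data, and tree depth are assumptions chosen for illustration; they are not taken from [25] or [141]-[144].

```python
# Minimal sketch of an inherently interpretable classifier: a shallow decision
# tree over simple statistics of a complex baseband signal. The features and
# the random stand-in dataset below are illustrative assumptions only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def expert_features(iq: np.ndarray) -> np.ndarray:
    """Per-example statistics of a complex baseband vector."""
    amp = np.abs(iq)
    phase = np.angle(iq)
    return np.array([
        amp.std() / (amp.mean() + 1e-9),     # amplitude variation
        np.abs(np.mean(iq ** 2)),            # 2nd-order moment magnitude
        np.abs(np.mean(iq ** 4)),            # 4th-order moment magnitude
        np.diff(np.unwrap(phase)).std(),     # instantaneous-frequency spread
    ])

# Synthetic stand-in dataset: random complex signals with random binary labels
rng = np.random.default_rng(0)
X = np.stack([expert_features(rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The learned rules are directly human-readable, which is the point of this model class
print(export_text(tree, feature_names=["amp_cv", "m2", "m4", "if_spread"]))
```

The printed rule list is the explanation itself, which is what the quoted passage means by intrinsic interpretability, at the cost of expressiveness relative to deep NNs.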
“…RFML-based approaches have aimed to replace the human intelligence and domain expertise required to identify and characterize these features using deep neural networks and advanced architectures, such as CNNs and Recurrent Neural Networks (RNNs), to both blindly and automatically identify separating features and classify signals of interest, with minimal pre-processing and less a priori knowledge [48], [52], [56], [57], [82]. Given the significant research in RFML-based modulation classification, it can be argued that AMC is one of the most mature fields in RFML, and has been deployed in real-world products [122].…”
Section: A. Spectrum Sensing
confidence: 99%
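The CNN/RNN-style classifiers over raw IQ referred to in this statement can be illustrated with a small hybrid model: a 1D-CNN front end followed by an LSTM and a linear output over modulation classes. This is a minimal sketch with layer sizes and an assumed class count of 11, not the specific networks of [48], [52], [56], [57], or [82].

```python
# Minimal CNN-LSTM sketch for raw-IQ modulation classification.
# All layer sizes and the 11-class output are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes: int = 11):
        super().__init__()
        # CNN front end extracts local features directly from the 2-channel IQ stream
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM summarizes the feature sequence over time
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.out = nn.Linear(64, num_classes)

    def forward(self, iq):                    # iq: (batch, 2, num_samples)
        feats = self.cnn(iq)                  # (batch, 32, num_samples / 4)
        feats = feats.transpose(1, 2)         # (batch, time, 32) for the LSTM
        _, (h_n, _) = self.rnn(feats)
        return self.out(h_n[-1])              # logits over modulation classes

logits = CNNLSTMClassifier()(torch.randn(4, 2, 1024))
print(logits.shape)                           # torch.Size([4, 11])
```

The model consumes raw IQ with no hand-crafted features, which is the "minimal pre-processing, less a priori knowledge" property the quoted passage attributes to RFML-based AMC.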