“…The ratio is 6:2:2. In this letter, the publicly available RadioML2016.10a dataset from [3] is used to evaluate the DL models. We chose the following typical DL-based AMR algorithms as benchmarks, which cover various mainstream DL frameworks, each with its own advantages: CNN-LSTM [9], Complex-CNN [15], the improved convolutional neural network (CNN)-based automatic modulation classification network (IC-AMCNet) [16], the convolutional long short-term deep neural network (CLDNN) [17], the hybrid convolutional and long short-term memory network (RCTLNet) [18], and the CNN-GRU hybrid network model (CGRNet) [19]. As shown in Figure 2, the recognition rate of the model proposed in this paper improves significantly over the entire SNR range, even compared with the current state-of-the-art model; adding the attention mechanism yields a further substantial performance gain over the original model in the −6 dB to 18 dB SNR range, at only a small cost in inference time and parameter count.…”
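The 6:2:2 train/validation/test split mentioned above can be sketched as follows. This is a minimal illustration on synthetic data shaped like RadioML2016.10a samples (2×128 I/Q frames, 11 modulation classes); the actual dataset loading, sample count, and random seed are assumptions, not the authors' code.

```python
import numpy as np

# Stand-in for RadioML2016.10a: each sample is a (2, 128) I/Q frame.
# n_samples, the seed, and the synthetic data are illustrative only.
rng = np.random.default_rng(0)
n_samples = 1000
X = rng.standard_normal((n_samples, 2, 128))   # I/Q channels x 128 timesteps
y = rng.integers(0, 11, size=n_samples)        # 11 modulation classes

# Shuffle once, then cut into 60% train / 20% validation / 20% test.
idx = rng.permutation(n_samples)
n_train = int(0.6 * n_samples)
n_val = int(0.2 * n_samples)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

Because the split is done on a single shuffled index array, the three subsets are guaranteed to be disjoint and to cover every sample exactly once.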