2018
DOI: 10.3390/rs10060819
Hierarchical Fusion of Convolutional Neural Networks and Attributed Scattering Centers with Application to Robust SAR ATR

Abstract: This paper proposes a synthetic aperture radar (SAR) automatic target recognition (ATR) method via hierarchical fusion of two classification schemes, i.e., convolutional neural networks (CNN) and attributed scattering center (ASC) matching. CNN can work with notably high effectiveness under the standard operating condition (SOC). However, it can hardly cope with various extended operating conditions (EOCs), which are not covered by the training samples. In contrast, the ASC matching can handle many EOCs relate…
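The hierarchical decision rule sketched in the abstract — trust the CNN when conditions resemble the SOC, and fall back to ASC matching when the CNN decision is unreliable — can be illustrated with a minimal cascade. The function names, the confidence test (maximum softmax probability), and the threshold value below are illustrative assumptions, not the paper's actual fusion criteria:

```python
import numpy as np

def hierarchical_fusion(cnn_probs, asc_match, threshold=0.9):
    """Return a class label via a cascaded (hierarchical) decision fusion.

    cnn_probs : 1-D array of CNN softmax outputs, one entry per class.
    asc_match : fallback classifier, called only when the CNN is unsure;
                here it stands in for attributed scattering center matching.
    threshold : minimum CNN confidence required to accept its decision.
    """
    cnn_probs = np.asarray(cnn_probs, dtype=float)
    if cnn_probs.max() >= threshold:      # CNN is confident -> accept its label
        return int(cnn_probs.argmax())
    return asc_match()                    # otherwise defer to ASC matching

# Toy usage: a confident CNN output is accepted directly,
# an ambiguous one is routed to the (stubbed) ASC matcher.
confident = hierarchical_fusion([0.02, 0.95, 0.03], asc_match=lambda: -1)
ambiguous = hierarchical_fusion([0.40, 0.35, 0.25], asc_match=lambda: 2)
```

The cascade structure is what makes the scheme robust: under SOC the fast CNN path dominates, while EOC samples, which tend to produce low-confidence CNN outputs, are handled by the physically grounded ASC matcher.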


Cited by 31 publications (30 citation statements)
References 46 publications
“…The accuracy of MdpCaps-Csl is improved by 11.79%, 4.79%, 6.19%, 3.19%, 0.99%, 7.09%, 2.79% and 7.19%, respectively, compared with EMACH (extended maximum average correlation height) + PDCCF (polynomial distance classifier correlation filter) [58], IGT (iterative graph thickening) [59], SRC [22], MSS (monogenic scale space) [60], MPMC (modified polar mapping classifier) [61], AdaBoost [19], CGM (conditionally Gaussian model) [62], and BCS (Bayesian compressive sensing) + scattering centers [63]. MdpCaps-Csl is also slightly more accurate than other deep-learning-based methods, e.g., CNN [28], ComplexNet [64], A-ConvNet [29], CNN + SVM [65], DCHUN [56], CNN-TL-bypass [66], CNN + ASC [30], LCNN + Visual Attention [32] and APCRLNet [57]. The above experiments show that MdpCaps-Csl performs well in recognition without data augmentation.…”
Section: Experiments on All Training Samples
confidence: 99%
“…In addition, the accuracy of CNN-TL-bypass [66] is 97.15% using a total of 500 training samples, randomly selected as 50 samples from each of the 10 classes of training data. With only 10% of the training data, only 275 images are selected as the training set. The accuracies of [30] and the semi-supervised transfer learning model [68] are 87% and 91.36%, respectively, when using 20% of the training data. The accuracy of MdpCaps-Csl under the same condition is 98.80%, far higher than the above two methods.…”
Section: E. Experiments on Partial Training Samples
confidence: 99%
“…The unsupervised DL models comprise the multi-discriminator generative adversarial network (MGAN-CNN), which generates unlabeled images with a GAN and feeds them, together with the original labeled images, into a CNN [61]; the feature-fusion SAE (FFAE) [15], which extracts 23 baseline features and three-patch local binary pattern (TPLBP) features and subsequently feeds them into an SAE for feature fusion; and the variational AE based on a residual network (ResVAE) [22]. The supervised models for performance evaluation are the ED-AE [30], the Triplet-DAE [31], the CNN with SVM [37], the A-ConvNet [38], the ESENet, which is based on a new enhanced squeeze-and-excitation (enhanced-SE) module [35], and the hierarchical fusion of CNN and ASC (ASC-CNN), which provides a complicated scheme to fuse the decisions of the ASC model and the CNN [39]. Among these methods, the CNN with SVM and the A-ConvNet are implemented in our code in Python.…”
Section: Evaluation on Ten-Target Classification
confidence: 99%
“…Although these AE-based models have developed an effective way to learn robust representations from an unlabeled SAR dataset and achieved competitive results, the performance of most of them is still slightly inferior to that of their supervised counterparts [35][36][37][38][39] and of some handcrafted features [4,5,7] based on electromagnetic scattering models. The major reasons include the following:…”
Section: Introduction
confidence: 99%