2015 IEEE 5th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR)
DOI: 10.1109/apsar.2015.7306296
SAR ATR based on dividing CNN into CAE and SNN

Cited by 29 publications (20 citation statements)
References 4 publications
“…Naturally, it is vital to design a network structure that can make full use of the limited available data (such as the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset). Recent ATR studies have begun to adopt convolutional neural networks (CNNs); this approach provides a powerful tool for SAR ATR, with significant progress in recent years [14][15][16]. However, CNNs remain unable to overcome the aforementioned challenges, partly because of their heavy dependence on large amounts of training data.…”
Section: Introduction
confidence: 99%
“…Due to the limited training samples, SAR ATR networks are usually shallower than those used for natural image recognition. For type identification, both a shallow CAE and a CNN have been employed for MSTAR and TerraSAR-X data recognition, using sparsely connected convolution architectures and fine-tuning strategies, and achieve recognition accuracies above 98% [17,21,22]. Recently, various handcrafted features, such as multi-aspect scattering features and texture features, have been combined with deep learning for precise recognition.…”
Section: Introduction
confidence: 99%
“…On the MSTAR database, this method obtains a target recognition accuracy of 90.1% for 3 classes and 84.7% for 10 classes. Similarly, Li et al. use an autoencoder to initialize the DCNN [11]. The difference is that fully connected layers are used as the SNN, serving as the final classifier in [11], which greatly reduces the training time of the DCNN while preserving accuracy.…”
Section: Introduction
confidence: 99%
“…Similarly, Li et al. use an autoencoder to initialize the DCNN [11]. The difference is that fully connected layers are used as the SNN, serving as the final classifier in [11], which greatly reduces the training time of the DCNN while preserving accuracy. Although the accuracy of the two methods is not very high, they depend less on expert experience during feature learning.…”
Section: Introduction
confidence: 99%
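The citing works above describe the paper's core idea: pretrain the convolutional feature extractor as a CAE, then train only a small fully connected network (the SNN) as the final classifier. Below is a minimal, illustrative sketch of that two-stage split in NumPy. It is not the authors' implementation: random filters stand in for CAE-pretrained ones, and the toy data, shapes, and hyperparameters are all assumptions made for demonstration.

```python
import numpy as np

# Sketch of "divide CNN into CAE + SNN": stage 1 provides frozen convolutional
# filters (random weights stand in here for CAE-pretrained ones); stage 2 trains
# only a small softmax classifier (the "SNN") on the extracted features.

rng = np.random.default_rng(0)

def conv2d_valid(img, kernels):
    """Naive valid convolution: img (H, W), kernels (K, kh, kw) -> (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)  # ReLU activation

# Stand-in for CAE-pretrained filters (output of stage 1).
filters = rng.standard_normal((4, 3, 3)) * 0.1

def extract_features(img):
    return conv2d_valid(img, filters).reshape(-1)  # flatten for the classifier

# Toy 2-class data: "bright-top" vs "bright-bottom" 8x8 image chips.
def make_chip(cls):
    img = rng.standard_normal((8, 8)) * 0.1
    if cls == 0:
        img[:4] += 1.0
    else:
        img[4:] += 1.0
    return img

X = np.stack([extract_features(make_chip(c % 2)) for c in range(40)])
y = np.array([c % 2 for c in range(40)])

# Stage 2: train only the softmax "SNN" head; conv filters stay frozen.
W = np.zeros((X.shape[1], 2))
for _ in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - np.eye(2)[y]) / len(y)

acc = ((X @ W).argmax(axis=1) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The split mirrors the claimed benefit in the citation statements: because only the small classifier head is trained in stage 2, the expensive end-to-end DCNN training is avoided.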