2021
DOI: 10.1109/tgrs.2020.3004911
Multiscale CNN With Autoencoder Regularization Joint Contextual Attention Network for SAR Image Classification

Cited by 30 publications (10 citation statements)
References 50 publications
“…These models showcase low complexity but cannot be scaled to larger numbers of classes. To improve performance, the work in [11,12,13,14] proposes adaptive fuzzy learning (AFL), active ensemble deep learning (AEDL), and the autoencoder regularization joint contextual attention network (ARJCAN), which improve classification performance across multiple datasets and scenarios. Similar models are discussed in [15,16,17], which propose the use of spatial and semantic features and the novel attention fully convolutional network method (NAFCNN), allowing the model to augment multiple feature sets to enhance classification performance.…”
Section: Literature Review (mentioning)
confidence: 99%
“…5. Some identification techniques are vulnerable to adversarial attacks, where a malicious actor can manipulate the input image so that the identification technique misclassifies it [13,15].…”
Section: Many Identification Techniques Are Dependent (mentioning)
confidence: 99%
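The vulnerability described in the passage above is commonly illustrated with the fast gradient sign method (FGSM). Below is a minimal PyTorch sketch of that attack; the `model`, the assumed input range of [0, 1], and the budget `epsilon` are illustrative assumptions, not details from the cited papers [13,15].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the sign direction of the gradient, i.e. the
    # direction that locally increases the classification loss the most.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # assumes inputs in [0, 1]
```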
“…There are also two other attention mechanisms, multidimensional and multi-head attention [21]. Multi-head attention projects the inputs linearly into multiple subsets and finally merges them to compute the final attention weights [58]; it is especially useful when the attention mechanism is employed in conjunction with CNN methods [59][60][61]. Multidimensional attention, which is mostly employed for natural language processing, computes weights from a matrix representation of the features instead of vectors [62,63].…”
Section: Attention Mechanisms (mentioning)
confidence: 99%
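As the passage above describes, multi-head attention projects the input linearly, splits it into several heads (subsets), attends within each head, and merges the results. A minimal PyTorch sketch follows; the projection matrices `w_qkv` and `w_out` and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multi_head_attention(x: torch.Tensor, w_qkv: torch.Tensor,
                         w_out: torch.Tensor, num_heads: int) -> torch.Tensor:
    batch, seq, dim = x.shape
    head_dim = dim // num_heads
    # One linear projection of shape (dim, 3*dim), then split into Q, K, V.
    q, k, v = (x @ w_qkv).chunk(3, dim=-1)
    def split(t):  # (batch, seq, dim) -> (batch, heads, seq, head_dim)
        return t.view(batch, seq, num_heads, head_dim).transpose(1, 2)
    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention computed independently per head.
    weights = F.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1)
    heads = weights @ v
    # Merge the heads back together and apply the output projection.
    merged = heads.transpose(1, 2).reshape(batch, seq, dim)
    return merged @ w_out
```

In practice, `torch.nn.MultiheadAttention` provides an equivalent packaged implementation; the manual version above only makes the split-and-merge structure explicit.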
“…In addition, it was observed that most of the RNN methods were used in combination with CNN methods [76,78,124]. Generative adversarial networks (GANs) [53,125,126], graph neural networks (GNNs) [80,82], and other DL methods, including capsule networks [72] and autoencoders [61], were the other DL algorithms used, in 12, 5, and 4 papers, respectively. Figure 9 shows the number of papers that employed the attention mechanism for each DL algorithm.…”
Section: Overview Of the Reviewed Papers (mentioning)
confidence: 99%
“…Ley et al. [32] employed generative adversarial network (GAN) transcoding to translate SAR images into optical images, after which the output layers of the FCN were replaced with a classifier. Wu et al. [33] designed a multiscale convolutional neural network for pixel-wise classification that follows an encoder-decoder architecture; the network uses an auto-encoder regularization branch and a contextual attention branch to learn classification information efficiently. Fang et al. [34] designed a Siamese U-Net with shared weights and a fast Fourier transform (FFT) correlation layer for SAR-optical matching; notably, the global context and local details of the SAR and optical images were well retained.…”
mentioning
confidence: 99%
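The two-branch design attributed to Wu et al. [33] in the passage above can be summarized in a short sketch: an encoder-decoder backbone whose features feed a pixel-wise classifier, a contextual attention branch that reweights those features, and an auto-encoder branch whose reconstruction loss regularizes training. Everything below (layer sizes, names, the 0.1 loss weight) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoBranchSegNet(nn.Module):
    def __init__(self, in_ch: int = 1, num_classes: int = 5, width: int = 32):
        super().__init__()
        # Encoder-decoder backbone (downsample once, upsample once).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width, width, 2, stride=2), nn.ReLU())
        # Contextual attention branch: per-pixel weights over decoder features.
        self.attention = nn.Sequential(nn.Conv2d(width, 1, 1), nn.Sigmoid())
        # Auto-encoder regularization branch: reconstructs the input image.
        self.reconstruct = nn.Conv2d(width, in_ch, 1)
        self.classify = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        feats = self.decoder(self.encoder(x))
        attended = feats * self.attention(feats)  # contextual reweighting
        return self.classify(attended), self.reconstruct(feats)

# Joint training step: classification loss plus a weighted reconstruction
# term acting as the regularizer (the 0.1 weight is an assumption).
model = TwoBranchSegNet()
x = torch.randn(2, 1, 64, 64)              # fake single-channel SAR patches
labels = torch.randint(0, 5, (2, 64, 64))  # fake pixel-wise class labels
logits, recon = model(x)
loss = nn.functional.cross_entropy(logits, labels) \
       + 0.1 * nn.functional.mse_loss(recon, x)
loss.backward()
```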