Global in Local: A Convolutional Transformer for SAR ATR FSL
2022
DOI: 10.1109/lgrs.2022.3183467

Cited by 24 publications (11 citation statements)
References 11 publications
“…In addition, several recently proposed self-supervised learning approaches that achieved state-of-the-art results in SAR ATR were selected to be used in comparison experiments. These included three contrastive self-supervised learning models, i.e., the PL method [24], the CDA method [27], and the ConvT method [30]. In addition, four self-supervised learning methods were used, i.e., the TSDF-N method [9], the ICSGF method [10], the SFAS method [42], and the DKTS-N method [43].…”
Section: Comparison With Reference Methods (mentioning)
confidence: 99%
“…To increase the classification accuracy of SAR ATR for ships, Xu et al modified the SimSiam framework, which is a classic CSL framework, and developed a new positive pair sampling method that considered polarization information [29]. Wang et al proposed a mixture loss method consisting of contrastive loss and label propagation to investigate the global and local representations in SAR images [30]. Ren et al proposed a Siamese feature embedding network and leveraged the CSL approach to train a low-dimensional feature space for feature extraction in SAR ATR [31].…”
Section: Related Work (mentioning)
confidence: 99%
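The citation statement above refers to a mixture loss that combines a contrastive term with label propagation over the global and local representations of SAR images. As a rough illustration only, the PyTorch sketch below mixes a standard NT-Xent contrastive loss with a generic graph-based label-propagation penalty; the function names, the propagation scheme, and the weight lam are assumptions and do not reproduce the formulation in [30].

# Illustrative sketch only: a weighted mixture of a standard NT-Xent contrastive
# loss and a generic graph-based label-propagation penalty.  Function names,
# the propagation scheme, and the weight `lam` are assumptions, not the
# formulation of the cited ConvT paper.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # Contrastive loss over two augmented views z1, z2 of shape (B, D).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, D)
    sim = z @ z.t() / temperature                              # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                          # exclude self-pairs
    n = z.shape[0]
    targets = torch.arange(n, device=z.device).roll(n // 2)    # positive = the other view
    return F.cross_entropy(sim, targets)

def label_propagation_loss(feats, labels, num_classes, labelled, alpha=0.99, steps=10):
    # Propagate one-hot labels of the labelled samples over a cosine-similarity
    # graph, then penalise disagreement on the labelled nodes (a common
    # semi-supervised stand-in, not the exact term from the paper).
    f = F.normalize(feats, dim=1)
    w = torch.relu(f @ f.t())
    w.fill_diagonal_(0)
    s = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)         # row-normalised graph
    y = F.one_hot(labels, num_classes).float() * labelled.float().unsqueeze(1)
    z = y.clone()
    for _ in range(steps):                                     # truncated propagation
        z = alpha * (s @ z) + (1 - alpha) * y
    logp = torch.log_softmax(z, dim=1)
    return F.nll_loss(logp[labelled], labels[labelled])

def mixture_loss(z1, z2, feats, labels, num_classes, labelled, lam=0.5):
    # Weighted sum of the two terms; `lam` balances the global contrastive
    # structure against the locally propagated label information.
    return nt_xent(z1, z2) + lam * label_propagation_loss(feats, labels, num_classes, labelled)

In a few-shot setting, labelled would mark the handful of support samples in each batch; everything else here is a placeholder for the loss actually used in the paper.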
“…The attention weight is obtained by computing cosine similarity in Euclidean space. Wang et al [33] developed a method combining CNN and transformer, which makes full use of the local perception capability of CNN and the global modeling capability of the transformer. Li et al [34] constructed a multi-aspect SAR sequence dataset from the MSTAR data.…”
Section: Transformer In Target Recognition (mentioning)
confidence: 99%
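The passage above describes pairing the local perception of a CNN with the global modelling of a transformer. The PyTorch sketch below shows one generic way such a hybrid block can be arranged, with a small convolutional stem feeding a multi-head self-attention layer over the flattened feature map; the layer sizes, layout, and class name are illustrative assumptions rather than the architecture of [33].

# Hypothetical sketch of a block that pairs a convolutional stem (local
# perception) with multi-head self-attention over the resulting feature map
# (global modelling).  Layer sizes and the overall layout are illustrative
# assumptions, not the architecture of the cited paper.
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    def __init__(self, in_ch=1, dim=64, heads=4):
        super().__init__()
        # Convolutional stem: captures local texture and speckle structure.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Self-attention over the flattened feature map: global dependencies.
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                         # x: (B, in_ch, H, W)
        feat = self.stem(x)                       # (B, dim, H/4, W/4) local features
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W/16, dim) token sequence
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)
        return out + tokens                       # residual connection

if __name__ == "__main__":
    block = ConvAttentionBlock()
    chips = torch.randn(8, 1, 64, 64)             # batch of single-channel SAR chips
    print(block(chips).shape)                     # torch.Size([8, 256, 64])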
“…In recent decades, deep learning has been applied in signal and image processing fields and demonstrated its superior performance. As for the SAR ATR application, many excellent studies have proposed many deep learning methods with outstanding results [14][15][16][17][18][19][20][21][22][23][24][25]. Chen et al [26] proposed an all-convolutional network replacing all the dense layers with the convolutional layers, which leads to outstanding recognition performance.…”
Section: Introduction (mentioning)
confidence: 99%
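The all-convolutional network mentioned above replaces the dense classification layers with convolutional ones. The minimal PyTorch sketch below illustrates that general idea, ending in a valid convolution followed by a 1x1 convolution instead of fully connected layers; the channel counts, kernel sizes, and 64x64 input assumption are illustrative and do not reproduce the configuration of [26].

# Hypothetical sketch of an "all-convolutional" classifier: the final dense
# layers are replaced by a valid convolution and a 1x1 convolution so every
# layer is convolutional.  Channel counts and kernel sizes are illustrative.
import torch
import torch.nn as nn

def all_conv_classifier(num_classes=10):
    return nn.Sequential(
        nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),     # 64 -> 30
        nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),    # 30 -> 13
        nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),    # 13 -> 4
        nn.Conv2d(64, 128, 4), nn.ReLU(),                    # 4 -> 1, replaces a dense layer
        nn.Conv2d(128, num_classes, 1),                      # 1x1 conv instead of the final dense layer
        nn.Flatten(),                                        # (B, num_classes) class scores
    )

if __name__ == "__main__":
    net = all_conv_classifier()
    x = torch.randn(4, 1, 64, 64)                            # batch of 64x64 SAR chips
    print(net(x).shape)                                      # torch.Size([4, 10])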