2021
DOI: 10.3390/rs13214358
Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network

Abstract: Recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to adversarial examples, which raises security concerns. An adversarial attack can make a deep convolutional neural network (CNN)-based SAR-ATR system output an attacker-intended wrong label prediction by adding small adversarial perturbations to the SAR images. Existing optimization-based adversarial attack methods generate adversarial examples by minimi…
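The attack mechanism the abstract describes — steering a classifier to an attacker-chosen label with a small input perturbation — can be illustrated with a minimal sketch. This is not the paper's UNet-GAN method; it is a generic targeted fast-gradient-sign step on a toy softmax-linear classifier, and every weight, shape, and name below is hypothetical, chosen only for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_fgsm_step(x, W, b, target, eps):
    """One targeted fast-gradient-sign step against a softmax-linear model.

    Moves x against the sign of the gradient of the cross-entropy toward
    the attacker's chosen `target` label, under an L-inf budget `eps`,
    then clips back to the valid pixel range [0, 1].
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad_x = W.T @ (p - onehot)   # d(cross-entropy)/dx for softmax(Wx + b)
    return np.clip(x - eps * np.sign(grad_x), 0.0, 1.0)

# Toy 2-class classifier on a 4-"pixel" image (illustrative weights only).
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.zeros(2)
x = np.full(4, 0.5)               # clean "image"; both logits tie at 1.0
x_adv = targeted_fgsm_step(x, W, b, target=1, eps=0.1)
```

With these weights the perturbation stays within the eps = 0.1 budget per pixel while tipping the prediction to the target class, which is the same small-perturbation, wrong-label effect the abstract attributes to adversarial attacks on SAR-ATR systems.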

Cited by 23 publications (6 citation statements)
References 34 publications (52 reference statements)
“…Even though there are numerous improved networks [33], [34] based on U-Net, the U-Net model has its unique advantages in SAR attacks and is widely used in mainstream SAR adversarial attack algorithms [18], [35], [36]. The reasons for selecting U-Net as the network architecture can be outlined as follows:…”
Section: A Network Structure Of The Generator And Attenuator
confidence: 99%
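The structural property the citing authors allude to — U-Net's encoder-decoder with skip connections carrying full-resolution detail past the bottleneck, so generated perturbations stay pixel-aligned with the input — can be sketched without any learned weights. Everything below (the average-pooling encoder, the tanh bottleneck stand-in, the averaging fusion) is hypothetical scaffolding, not the generator architecture from the paper.

```python
import numpy as np

def unet_like_pass(x):
    """Minimal U-Net-style encoder-decoder with one skip connection.

    Shapes only, no learned weights: the skip connection forwards the
    full-resolution input around the downsampled bottleneck, which is
    why U-Net-style generators can emit pixel-aligned perturbations.
    """
    skip = x                                    # encoder feature saved for the skip
    h, w = x.shape
    down = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x2 average pool
    bottleneck = np.tanh(down)                  # stand-in for learned bottleneck layers
    up = np.repeat(np.repeat(bottleneck, 2, axis=0), 2, axis=1)  # nearest upsample
    fused = np.stack([up, skip], axis=0)        # channel-wise concat of skip + decoder
    return fused.mean(axis=0)                   # stand-in for the final 1x1 conv

x = np.random.default_rng(1).uniform(size=(8, 8))
out = unet_like_pass(x)
```

The output keeps the input's spatial resolution, which is the property that matters when the network's output is an additive perturbation for the input image.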
“…The study [239] proposed an approach that aims to differentiate the target distribution by utilizing a feature dictionary model, without any prior knowledge of the classifier. Finally, we summarize adversarial attacks against image classification in RS ( [195], [234]- [248]) in Table IX.…”
Section: Image Classification
confidence: 99%
“…Specifically, simpler network structures result in more robustness against adversarial attacks. Further, Du et al. introduced generative models to accelerate the attack process and to refine the scattering features [32,33]. Most recently, Liu et al. illustrated the physical feasibility of digital adversarial examples by exploiting a phase modulation jamming metasurface [34][35][36].…”
Section: Introduction
confidence: 99%