IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium
DOI: 10.1109/igarss46834.2022.9883914
Adversarial Attacks on Radar Target Recognition Based on Deep Learning

Cited by 2 publications (2 citation statements) | References 7 publications
“…It generates adversarial examples by modifying the image according to the gradient obtained through backward propagation of the loss function. Soon after, FGSM was introduced into adversarial attacks on SAR image samples [32][33][34][35][36]: perturbations are added in the direction of the greatest gradient change within the network model so as to rapidly increase the loss function, ultimately leading to incorrect classification by the model. The original FGSM algorithm requires only a single gradient update to produce adversarial examples; however, because it is a single-step attack with relatively high perturbation intensity and is only well suited to approximately linear loss functions, it achieves a lower attack success rate.…”
Section: Global Perturbation Attack
confidence: 99%
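To make the single-step attack described in this citation statement concrete, the following is a minimal sketch of FGSM in PyTorch. The function name, the epsilon parameter, and the [0, 1] pixel range are illustrative assumptions for the example, not details taken from the cited paper:

import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    # Forward pass, then backward propagation of the loss to obtain
    # the gradient of the loss with respect to the input image.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Single-step perturbation along the sign of the gradient, i.e.
    # the direction that most rapidly increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the adversarial image in a valid pixel range (assumed [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()

Because the perturbation is applied in one step at full strength, a larger epsilon raises the chance of misclassification but also makes the perturbation more visible, which is the trade-off the statement above attributes to the original FGSM.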
“…The projected gradient descent (PGD) method generates adversarial examples by applying multiple perturbations to the input samples along the sign direction of the gradient. PGD has been widely applied in adversarial attacks on SAR imagery [33,36,41], improving attack effectiveness by increasing the number of iterations and incorporating a layer of randomization. As an alternative to the aforementioned FGSM method and its variants, Dong et al. proposed a translation-invariant attack method (TIM) [42], which is less sensitive to the recognition regions of the attacked white-box model and generates adversarial examples with better transferability.…”
Section: Global Perturbation Attack
confidence: 99%
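The iterative PGD attack in this citation statement can be sketched the same way. This follows the standard PGD formulation (random start inside the epsilon L-infinity ball, gradient-sign steps, projection back into the ball); the step size alpha, step count, and all names are assumptions for illustration, not details from the cited works:

import torch

def pgd_attack(model, loss_fn, x, y, epsilon, alpha, num_steps):
    # Random start inside the epsilon L-infinity ball: the "layer of
    # randomization" mentioned in the citation statement.
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)
    x_adv = x_adv.clamp(0.0, 1.0).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # FGSM-style step along the gradient sign, then projection
        # back into the epsilon-ball around the original input x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

Compared with the one-step FGSM sketch above, each iteration takes a smaller step (alpha < epsilon), so increasing num_steps lets the attack follow a nonlinear loss surface more closely, which is why the statement reports higher attack effectiveness with more iterations.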