2022
DOI: 10.1109/lgrs.2022.3184311
Speckle-Variant Attack: Toward Transferable Adversarial Attack to SAR Target Recognition

Cited by 32 publications (9 citation statements)
References 29 publications
“…Chen et al. [36] proposed conducting adversarial attacks on the attention maps of input images, which achieves better results. Peng et al. [37] proposed the SVA attack, which consists of two major modules, an iterative gradient-based perturbation generator and a target region extractor, and can generate more transferable adversarial examples. DeepFool [38] is a transfer-based adversarial attack algorithm that aims to generate the minimal perturbation to an input sample that misleads a neural network model.…”
Section: Black-box Attack Methods
confidence: 99%
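
The two SVA modules described in the excerpt above can be pictured concretely. Below is a minimal sketch, assuming a PyTorch image classifier and a precomputed binary target-region mask; the function name, hyperparameters, and masking strategy are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_iterative_attack(model, x, label, region_mask,
                            eps=8 / 255, alpha=2 / 255, steps=10):
    """Iteratively perturb `x` inside `region_mask` to mislead `model`.

    x:           input SAR image batch, shape (N, C, H, W), values in [0, 1]
    region_mask: binary mask of the extracted target region, broadcastable to x
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the classification loss, but only inside the target region.
        x_adv = x_adv.detach() + alpha * grad.sign() * region_mask
        # Keep the accumulated perturbation within the eps ball and valid range.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The mask confines the iterative gradient steps to the target region, which is the role the excerpt attributes to the target region extractor.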
“…Moreover, causal features (i.e., target signatures) are unstable under different imaging conditions. Current adversarial attack studies have shown that deep learning is affected by small shifts in target signatures [26], [27]. A small dataset such as MSTAR cannot adequately reflect target and background variations across imaging conditions, even with reduced data bias.…”
Section: Explaining the Non-causality
confidence: 99%
“…Most recently, Liu et al. demonstrated the physical feasibility of digital adversarial examples by exploiting a phase-modulation jamming metasurface [34][35][36]. Meanwhile, Peng et al. proposed a region-restricted design that also has the potential to be implemented in the real world [37]. It is worth noting that all current studies assume a default setting in which the attacker can access the same training data as the victim model.…”
Section: Introduction
confidence: 99%
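
The region-restricted design mentioned in the last excerpt presupposes some way of locating the target within a SAR chip. As a hedged sketch only (the cited works' actual extractors are not reproduced here; the intensity quantile and dilation count below are assumptions), a crude intensity-based target-region extractor could look like this:

```python
import numpy as np
from scipy import ndimage

def extract_target_region(sar_image, quantile=0.95, dilate_iters=3):
    """Return a binary mask covering the brightest scatterers in a SAR chip."""
    threshold = np.quantile(sar_image, quantile)
    mask = sar_image >= threshold                       # bright target pixels
    mask = ndimage.binary_dilation(mask, iterations=dilate_iters)
    # Keep only the largest connected component as the target region.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(np.float32)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return (labels == (np.argmax(sizes) + 1)).astype(np.float32)
```

Its output could serve as the `region_mask` in the attack sketch above, confining the perturbation to the physically meaningful part of the scene.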