2023
DOI: 10.3390/rs15102699
Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images

Abstract: Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area that could have significant impacts on security and environmental monitoring. However, most current adversarial attack methods on SAR images are designed for white-box situations by end-to-end means, which are often difficult to achieve in real-world situations. This article proposes a novel black-box targeted attack method, called Shallow-Feature Attack (SFA). Specifically, SFA assumes that the shallow featur…

Cited by 4 publications (1 citation statement) · References 54 publications
“…On the contrary, in the black-box attack scenario, it is difficult for the attacker to obtain the information of the victim model. In general, black-box attacks can be divided into probabilistic label-based attacks [24]–[26], decision-based attacks [27], and transfer attacks [28], [29]. Among these three kinds of black-box attacks, the first two usually require a large number of queries to the neural network.…”
mentioning
confidence: 99%
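The distinction drawn in the citation statement can be illustrated with a minimal sketch of a transfer attack: the perturbation is crafted entirely on a surrogate model (zero queries to the victim) and then applied to the hidden victim model. This is not the paper's SFA method; the linear surrogate/victim classifiers, the correlation between their weights, and the step size `eps` are all illustrative assumptions, with an FGSM-style step standing in for a generic gradient attack.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Toy linear binary classifiers: predict 1 iff w @ x > 0. The attacker
# only knows the surrogate; the victim's weights stay hidden (black-box).
w_surrogate = rng.normal(size=d)
w_victim = w_surrogate + 0.3 * rng.normal(size=d)  # correlated, but unknown

def predict(w, x):
    return int(w @ x > 0)

def fgsm_on_surrogate(x, eps=0.5):
    """One FGSM-style step computed only from the surrogate (no victim queries)."""
    # For a linear model the input gradient of the logit is the weight
    # vector itself; step against the surrogate's current prediction.
    grad = w_surrogate if predict(w_surrogate, x) == 1 else -w_surrogate
    return x - eps * np.sign(grad)

samples = rng.normal(size=(200, d))
adv = np.array([fgsm_on_surrogate(x) for x in samples])

# Fraction of inputs whose label flips on the surrogate (white-box success)
# versus on the unseen victim (black-box transfer success).
surrogate_flip_rate = float(np.mean(
    [predict(w_surrogate, a) != predict(w_surrogate, x)
     for x, a in zip(samples, adv)]
))
transfer_rate = float(np.mean(
    [predict(w_victim, a) != predict(w_victim, x)
     for x, a in zip(samples, adv)]
))
```

Because the victim model is never queried while crafting `adv`, this contrasts with probabilistic label-based and decision-based attacks, which must repeatedly query the victim to estimate gradients or decision boundaries.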