2022
DOI: 10.22541/au.165633740.01163731/v1
Preprint
Local Aggregative Attack on SAR Image Classification Models

Abstract: Convolutional neural networks (CNNs) have been widely used in the field of synthetic aperture radar (SAR) image classification for their high classification accuracy. However, because CNNs learn a fairly discontinuous input-output mapping, they are vulnerable to adversarial examples. Unlike most existing attack methods that fool CNN models with complex global perturbations, this study provides an idea for generating more dexterous adversarial perturbations. It demonstrates that minor local perturbations are als…

Cited by 2 publications (1 citation statement)
References 13 publications
“…On the contrary, in the black-box attack scenario, it is difficult for the attacker to obtain the information of the victim model. In general, black-box attacks can be divided into probabilistic label-based attacks [24]-[26], decision-based attacks [27], and transferred attacks [28], [29]. Among these three kinds of black-box attacks, the first two usually require a large number of queries to the neural network.…”
mentioning
confidence: 99%
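The quoted statement notes that probabilistic label-based (score-based) attacks query only the model's output probabilities, and that such attacks typically need many queries. A minimal sketch of this idea, using random search against a toy softmax classifier (the function names, the toy model, and all parameters here are illustrative assumptions, not taken from the cited paper):

```python
import numpy as np

def score_based_attack(predict_fn, x, true_label, eps=0.1, n_queries=200, seed=0):
    """Score-based black-box attack via random search: propose perturbations
    inside an L_inf ball of radius eps and keep any candidate that lowers the
    predicted probability of the true label. Uses no gradients, only queries."""
    rng = np.random.default_rng(seed)
    best = x.copy()
    best_score = predict_fn(best)[true_label]  # 1 query
    for _ in range(n_queries):                 # + n_queries more queries
        cand = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        score = predict_fn(cand)[true_label]
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

# Toy "victim model" (hypothetical): linear classifier over two simple
# features of the input, with a softmax output.
def toy_predict(x):
    feats = np.array([x.mean(), x.std()])
    logits = np.array([[1.0, -1.0], [-1.0, 1.0]]) @ feats
    e = np.exp(logits - logits.max())
    return e / e.sum()

x0 = np.full(16, 0.8)                      # clean input, confidently class 0
adv, conf = score_based_attack(toy_predict, x0, true_label=0)
```

Note that even this crude search spends 201 model queries on a single 16-dimensional input, which illustrates the query-cost drawback the citing paper attributes to score-based and decision-based black-box attacks.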