2021
DOI: 10.1155/2021/6663028
Project Gradient Descent Adversarial Attack against Multisource Remote Sensing Image Scene Classification

Abstract: Deep learning technology (deeper, better-optimized network structures) and remote sensing imaging (i.e., increasingly multisource and multicategory remote sensing data) have developed rapidly. Although deep convolutional neural networks (CNNs) have achieved state-of-the-art performance on remote sensing image (RSI) scene classification, the existence of adversarial attacks poses a potential security threat to CNN-based RSI scene classification. The corresponding adversarial samples can be genera…

Cited by 18 publications (7 citation statements); references 34 publications.

Citation statements (ordered by relevance):
“…PGD is one of the most popular and powerful attacks and belongs to the class of gradient-based attacks [40], [41]. It computes the gradient of the loss function with respect to the input, x, and the attacker then creates the adversarial example by adding the sign of the gradient to the input data.…”
Section: PGD (mentioning)
confidence: 99%
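
The operation this statement describes is the gradient-sign step that PGD repeats at every iteration. A minimal PyTorch sketch of that single step is given below; the model, tensors, and step size are illustrative assumptions of mine, not details taken from the paper or the citing works.

```python
# One gradient-sign step (the FGSM-style core that PGD iterates), sketched in PyTorch.
import torch
import torch.nn.functional as F

def gradient_sign_step(model, x, y, step_size):
    """Add the sign of the loss gradient w.r.t. the input x (untargeted attack step)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # classification loss of the target model
    grad, = torch.autograd.grad(loss, x)   # dL/dx
    return (x + step_size * grad.sign()).detach()

# Purely illustrative usage with a dummy classifier and a random "image" in [0, 1]:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = gradient_sign_step(model, x, y, step_size=0.01)
```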
“…However, it uses a different method to generate adversarial examples. It initializes the search for the adversarial example at a random point in a suitable region, then runs several iterations to find an adversarial example with the greatest loss, while keeping the size of the perturbation smaller than a specified amount referred to as epsilon [20]. PGD can generate stronger attacks than FGSM and BIM.…”
Section: Adversarial Machine Learning (mentioning)
confidence: 99%
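
Putting these pieces together, the procedure the statement outlines (random start, iterative loss maximization, perturbation bounded by epsilon) can be written in a few lines. The sketch below assumes the usual L-infinity formulation with inputs normalized to [0, 1]; the function name and hyperparameter values are my own placeholders, not values from the cited works.

```python
# Minimal PGD sketch: random start inside the epsilon ball, several gradient-sign
# iterations, and a projection keeping the perturbation within epsilon (L-infinity).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step_size=2 / 255, steps=10, random_start=True):
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    if random_start:
        # Start the search at a random point inside the epsilon ball around x.
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # maximize the classification loss
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back so that ||x_adv - x_orig||_inf <= eps and pixels stay valid.
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```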
“…The BIM attack has been used in [32] to attack channel estimation prediction models. 3) PGD: PGD is one of the most popular attacks and belongs to the class of gradient-based attacks [33]. It is often called the most powerful attack of this kind.…”
Section: Adversarial Attacks (mentioning)
confidence: 99%
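
For contrast with this statement, BIM (iterative FGSM) can be viewed as the same iterative procedure without the random start. Reusing the hypothetical pgd_attack sketch above, the difference reduces to a single flag; this is only an illustration of the relationship, not code from [32] or [33].

```python
# BIM as a special case of the pgd_attack sketch above: identical gradient-sign
# updates and epsilon clipping, but the search starts at the clean input itself.
# model, x, y are the placeholder objects from the earlier usage example.
x_bim = pgd_attack(model, x, y, eps=8 / 255, step_size=2 / 255, steps=10,
                   random_start=False)
```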