2023
DOI: 10.21203/rs.3.rs-2459893/v1
Preprint
A Survey on Physical Adversarial Attack in Computer Vision

Abstract: In the past decade, deep learning has largely replaced traditional hand-crafted features with strong feature-learning capability, driving substantial improvements on conventional vision tasks. However, deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples crafted with small noise that is imperceptible to human observers but can make DNNs misbehave. Existing adversarial attacks can be divided into digital and physical adversarial attacks. The former is designed to …
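The "small noise" the abstract refers to is typically computed from the gradient of the model's loss with respect to the input. A minimal sketch of that idea, using a hypothetical toy linear classifier in place of a DNN (the FGSM-style sign-of-gradient step is the classic digital attack recipe; the model, shapes, and epsilon here are illustrative assumptions, not from the survey):

```python
import numpy as np

# Toy stand-in for a DNN: a linear softmax classifier.
# The adversarial example is x + eps * sign(dL/dx), which keeps the
# perturbation bounded by eps in the L-infinity norm.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))   # weights: 10 input features -> 3 classes
x = rng.normal(size=10)        # a clean input
y = 0                          # its true label

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(W, x, y):
    return -np.log(softmax(W @ x)[y])

def loss_grad_wrt_input(W, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x,
    # for logits z = W @ x: dL/dz = p - onehot(y), dL/dx = W.T @ dL/dz.
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

eps = 0.1
x_adv = x + eps * np.sign(loss_grad_wrt_input(W, x, y))
```

Because the toy loss is convex in `x`, a step in the gradient-sign direction is guaranteed to increase it; for real DNNs the same one-step recipe usually, though not provably, raises the loss and flips the prediction.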

Cited by 11 publications (11 citation statements)
References 177 publications (340 reference statements)
“…We present a performance comparison of our method with other attack methods on the NIPS-17 dataset [33] when attacking some typical DNNs. We chose two modified attack methods (ColorFool [23] and NCF [15]) and three additive attack methods (including AdvLight [19], AdvRD [21], and RFLA [17]) for the comparative experiment. In the experiment, we limited the query counts for RFLA and AdvLight to 100, while all other implementations followed their original optimal settings.…”
Section: Methods
Mentioning confidence: 99%
“…One prevalent form of physical attack is patch attacks [24–26], where adversarial patches are attached to objects to deceive DNNs. Another line of work attempts to generate adversarial examples using optical equipment: the shadow attack [18] introduces shadows or blotchy perturbations on traffic-sign images, while RFLA [17] attacks by simulating the reflected light on the target object. As shown in Fig.…”
Section: Related Work
Mentioning confidence: 99%