2022
DOI: 10.48550/arxiv.2209.14262
Preprint
A Survey on Physical Adversarial Attack in Computer Vision
Cited by 5 publications (4 citation statements) · References 92 publications
“…For example, Wei et al. [14] focus on a subset of 41 physical adversarial attacks, while Fang et al. [15] specifically address optical-based physical adversarial attacks. Similarly, Wang et al. [16] primarily survey the tasks of image recognition and object detection, with some coverage of physical adversarial attacks in object tracking and semantic segmentation. However, their coverage is limited to a total of 71 attacks, with 38 attacks related to image recognition and 33 attacks related to object detection.…”
Section: Introduction
confidence: 99%
“…To comprehensively assess the robustness of these models, we conducted rigorous experiments involving a diverse set of classifiers and detectors, representing a wide range of mainstream methods. Through this extensive evaluation, we have uncovered insightful and intriguing findings that illuminate the relationship between the crafting of…” [A flattened comparison table spilled into this excerpt: it lists prior surveys (2018–2023) and benchmarks (2020–2023) by year, reference number, and venue (e.g., IEEE Access, ACM Computing Surveys, TPAMI, CVPR, NeurIPS, IJCAI, Pattern Recognition, arXiv) with ✓/✕ coverage marks across six criteria; the column headers are not recoverable.]
Section: Introduction
confidence: 99%
“…The evaluation of explainability of DNN models is known to be a challenging task, necessitating such an effort. From another perspective, while there have been many surveys of the literature on adversarial attacks and robustness [7,8,11,25,29,35,46,51,57,61,65,69,75,77,101,104,112,113,116,118,119,121,122,129,135], which focus on attacking the predictive outcome of these models, there has been no effort so far to study and consolidate existing work on attacks on the explainability of DNN models. Many recent efforts have demonstrated the vulnerability of explanations (or attributions) to human-imperceptible input perturbations across image, text, and tabular data [36,45,55,62,107,108,133].…”
Section: Introduction
confidence: 99%