2023
DOI: 10.48550/arxiv.2302.09457
Preprint

Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example

Abstract: Adversarial machine learning (AML) studies the adversarial phenomenon of machine learning, in which a model may make predictions that are inconsistent with or unexpected by humans. Several paradigms have recently been developed to explore this adversarial phenomenon at different stages of a machine learning system, such as training-time adversarial attacks (i.e., backdoor attacks), deployment-time adversarial attacks (i.e., weight attacks), and inference-time adversarial attacks (i.e., adversarial examples). However, although these…
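The abstract's taxonomy ties each attack to a stage of the model life cycle. As a concrete illustration of the least familiar of the three, the deployment-time weight attack, the following minimal NumPy sketch (not from the paper; the weight tensor and target index are illustrative assumptions) flips a single bit of a stored float32 parameter, the basic primitive behind RowHammer-style bit-flip attacks:

```python
import numpy as np

# Illustrative sketch of a deployment-time weight attack: perturb a trained
# model's stored parameters rather than its inputs. Here we flip the IEEE-754
# sign bit of one float32 weight. The tensor and target index are made up.

rng = np.random.default_rng(0)
weights = rng.normal(size=8).astype(np.float32)   # stand-in for a weight tensor

bits = weights.view(np.uint32).copy()             # reinterpret the raw float bits
target = 3                                        # hypothetical critical weight
bits[target] ^= np.uint32(1 << 31)                # XOR flips the sign bit

attacked = bits.view(np.float32)
print("before:", weights[target], "after:", attacked[target])
```

Flipping a high exponent bit instead would blow up the weight's magnitude, which is why the bit-flip literature finds that a handful of well-chosen flips can severely degrade or hijack a deployed model.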

Cited by 4 publications (7 citation statements)
References 176 publications
“…Backdoor attacks were first proposed in the 2D image domain (Gu, Dolan-Gavitt, and Garg 2017). Many attacks were subsequently developed (Zhao et al. 2022; Wu et al. 2023; Feng et al. 2022; Yuan et al. 2023; Doan et al. 2023). However, due to data-format restrictions, attacks designed for 2D images cannot be directly applied to 3D point clouds.…”
Section: Related Work
confidence: 99%
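For context, the BadNets-style attack cited in this excerpt (Gu, Dolan-Gavitt, and Garg 2017) poisons a small fraction of the training set by stamping a fixed trigger pattern on images and relabeling them to a target class; a model trained on the poisoned data then misclassifies any triggered input. A minimal sketch, with an assumed 4×4 corner trigger and 5% poison rate (both illustrative, not the cited setup):

```python
import numpy as np

# Minimal sketch of BadNets-style data poisoning. The trigger shape,
# poison rate, and toy data are illustrative assumptions.

def poison(images, labels, target_class=0, rate=0.05, seed=0):
    """Stamp a small white square (the trigger) on a random fraction of
    the training images and relabel them to the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0       # 4x4 trigger patch in the corner
    labels[idx] = target_class       # flip the label to the target class
    return images, labels

# Toy usage: 100 grayscale 28x28 images with pixel values in [0, 1]
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned = poison(x, y)
```

At test time the attacker stamps the same patch on a clean input to activate the backdoor, while untriggered inputs are classified normally.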
“…Consequently, the proposed vicinage-attack can be seen as a vulnerability test for trust systems. Reference [40] outlines three basic conditions for adversarial attacks: stealthiness, consistency, and inconsistency. The vulnerability of a trust system is also reflected in these three aspects.…”
Section: Vulnerability Testing on Trust System
confidence: 99%
“…For instance, a model could yield significantly different predictions on two visually similar images, with one being perturbed by malicious and imperceptible noises [15], [16], whereas a human's prediction would remain unaffected by such noises. We refer to this phenomenon as the adversarial phenomenon or adversarial attack, signifying the inherent adversarial relationship between DL models and human perception [28].…”
Section: A. Background Knowledge
confidence: 99%
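The phenomenon this excerpt describes is classically demonstrated with the fast gradient sign method (FGSM) of Goodfellow et al. (2015). A minimal sketch on a toy linear scorer (the model, ε, and data are illustrative assumptions, not from the cited experiments): since the input gradient of a linear score w·x is w itself, the FGSM step changes each pixel by at most ε while shifting the score toward the opposite class.

```python
import numpy as np

# Illustrative FGSM-style sketch: a per-pixel perturbation of at most eps
# shifts a model's prediction while a human would see the same image.
# Toy linear classifier; nothing here is taken from the cited experiments.

rng = np.random.default_rng(1)
w = rng.normal(size=784).astype(np.float32)    # linear scorer: class 1 if w @ x > 0
x = rng.uniform(size=784).astype(np.float32)   # a "clean" flattened 28x28 image

# For the linear score w @ x, the input gradient is w, so subtracting
# eps * sign(w) lowers the score by roughly eps * sum(|w|) (up to clipping)
# while altering every pixel by at most eps.
eps = 0.05
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)  # stay in the valid pixel range

print("clean score:", w @ x, " adversarial score:", w @ x_adv)
print("max pixel change:", np.abs(x_adv - x).max())
```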
“…To comprehensively assess the robustness of these models, we conducted rigorous experiments involving a diverse set of classifiers and detectors, representing a wide range of mainstream methods. Through this extensive evaluation, we have uncovered insightful and intriguing findings that illuminate the relationship between the crafting of…” [A comparison table from the citing paper spilled into this excerpt here: it lists prior surveys (through 2023; e.g., [32] IEEE Access, [34] Computer Science Review, [39] TPAMI, [28] arXiv) and benchmarks (2020–2023; e.g., [29] CVPR, [26] NIPS, [51] IJCAI, [54] CVPR), each scored ✓/✕/* across six comparison criteria whose column headers did not survive extraction; the excerpt is truncated.]
Section: Introduction
confidence: 99%