2020
DOI: 10.48550/arxiv.2012.06024
Preprint

Robustness and Transferability of Universal Attacks on Compressed Models

Alberto G. Matachana,
Kenneth T. Co,
Luis Muñoz-González
et al.

Abstract: Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks that create a single perturbation able to generalize across a large set of inputs. In this work, we analyze the effect of …
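The UAP idea the abstract describes can be sketched as a single perturbation, accumulated over many inputs and kept inside an L∞ budget. The following is a minimal illustrative sketch, not the paper's method: `craft_uap`, the constant-gradient toy scorer, and all parameter names are hypothetical, and the sign-gradient update stands in for the more involved per-input steps (e.g. DeepFool-style updates) used in the UAP literature.

```python
import numpy as np

def craft_uap(loss_grad, inputs, eps=0.1, lr=0.01, iters=50):
    """Accumulate one perturbation `delta` that raises the loss on *all*
    inputs, projecting onto the L-infinity ball of radius `eps` after
    every update so the perturbation stays inconspicuous."""
    delta = np.zeros(inputs.shape[1])
    for _ in range(iters):
        for x in inputs:
            # ascend the loss gradient evaluated at the perturbed input
            delta = delta + lr * np.sign(loss_grad(x + delta))
            # project back onto ||delta||_inf <= eps
            delta = np.clip(delta, -eps, eps)
    return delta

# Toy stand-in model: for a linear scorer the loss gradient w.r.t. the
# input is constant, so the UAP saturates at eps * sign(w).
w = np.array([1.0, -2.0, 0.5])
loss_grad = lambda x: w          # hypothetical gradient oracle
X = np.random.randn(8, 3)        # a small batch of inputs
delta = craft_uap(loss_grad, X, eps=0.1)
```

Because the same `delta` is reused for every input, one successful perturbation transfers across the whole input set, which is what makes UAPs practical against deployed (including compressed) models.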

Cited by 1 publication (1 citation statement)
References 28 publications (43 reference statements)
“…UAPs present a systemic risk, as they enable practical and physically realizable adversarial attacks. They have been demonstrated in many widely used and safety-critical applications, such as camera-based computer vision [7,8,2,18] and LiDAR-based object detection [10,11]. UAPs have also been shown to facilitate realistic attacks in both the physical [23] and digital [24] domains.…”
Section: Introduction
Confidence: 99%