2021
DOI: 10.48550/arxiv.2106.05825
Preprint
HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks

Abstract: DNNs are known to be vulnerable to so-called adversarial attacks, in which inputs are carefully manipulated to induce misclassification. Existing defenses are mostly software-based and come with high overheads or other limitations. This paper presents HASI, a hardware-accelerated defense that uses a process we call stochastic inference to detect adversarial inputs. HASI carefully injects noise into the model at inference time and uses the model's response to differentiate adversarial inputs from benign ones. We…
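The detection idea the abstract describes, injecting noise into the model at inference time and using the stability of its predictions to separate adversarial inputs from benign ones, can be sketched roughly as follows. This is a minimal illustrative sketch, not HASI's implementation: the toy linear "model", the weight-noise injection point, the noise level, and the agreement threshold are all assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN: a fixed linear classifier mapping 4 features to 10 classes.
W = rng.normal(size=(10, 4))

def predict(x, noise_std=0.0):
    """Classify x, optionally injecting Gaussian noise into the model weights."""
    noisy_W = W + rng.normal(scale=noise_std, size=W.shape)
    return int(np.argmax(noisy_W @ x))

def stochastic_inference(x, noise_std=0.5, n_runs=32, agreement_threshold=0.7):
    """Run many noisy inference passes; flag the input as adversarial when
    the noisy predictions disagree too often with the clean prediction."""
    clean_label = predict(x)
    votes = sum(predict(x, noise_std) == clean_label for _ in range(n_runs))
    agreement = votes / n_runs
    is_adversarial = agreement < agreement_threshold
    return clean_label, agreement, is_adversarial
```

The intuition is that a benign input sits far from the decision boundary, so its label survives the injected noise, while an adversarial input sits close to a boundary by construction and its label flips frequently under noise.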

Cited by 1 publication (1 citation statement)
References 24 publications (35 reference statements)
“…Several techniques have been proposed to mitigate adversarial attacks [36], [43], for example, pre-processing-based defenses [33], [44], [45], [46], [47], [48], gradient masking [49], adversarial training [50], and dataset encryption [51], but these techniques are model-specific or require access to the complete model parameters [52]. Hence, these techniques cannot be applied in the more stringent (and arguably realistic) threat model for HC-based inference.…”
Section: A. Related Work
confidence: 99%