Proceedings of the 56th Annual Design Automation Conference, 2019
DOI: 10.1145/3316781.3317825

Fault Sneaking Attack on Deep Neural Networks

Abstract: Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack with two constraints: 1) the clas…
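The constraint list is truncated above; the full text is in the paper. As a rough sketch of the kind of constrained problem the abstract describes, in our own notation rather than the paper's ($\theta$ are the original DNN parameters, $\delta$ the modification, $f(\cdot;\theta)$ the predicted label; the exact objective and norms may differ):

\begin{align*}
\min_{\delta} \quad & \|\delta\| \\
\text{s.t.} \quad & f(x_s;\, \theta + \delta) = t_s, \quad s = 1,\dots,S \quad \text{(attacked images hit their target labels)} \\
& f(x_k;\, \theta + \delta) = y_k, \quad k = 1,\dots,K \quad \text{(labels of the remaining images unchanged)}
\end{align*}

ADMM suits this form because the hard constraints can be moved into auxiliary variables, after which the method alternates between easier subproblems until the primal and dual residuals converge.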

Cited by 48 publications (15 citation statements: 0 supporting, 15 mentioning, 0 contrasting). References 32 publications.
“…As NN-based solutions become popular for many applications, NN security arises as a major concern for the practical deployment of NNs in safety-critical tasks. Many efforts have been devoted to investigating the security of NNs, especially malicious attack approaches ([15], [16], [17]). The attack object of these attacks is either the input sample (i.e., adversarial attack) or the neural network weights (i.e., fault injection attack).…”
Section: Security Concerns (mentioning)
confidence: 99%
“…While adversaries with expert knowledge are concerned with the effectiveness and stealthiness of the attack, they can manipulate the network prediction for a given image at will through fault injection techniques while ensuring model accuracy on other images. Previous studies have demonstrated that it is practical to launch precise and effective fault injection attacks on neural networks [16], [17].…”
Section: Security Concerns (mentioning)
confidence: 99%
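As a toy illustration of the weight-level fault model these statements describe (a hypothetical NumPy sketch; it is not the attack of [16] or [17], and every name in it is made up for illustration), the snippet below perturbs a single weight of a two-class linear model so that one input flips to the wrong class while another input's prediction is preserved:

import numpy as np

# Toy fault-injection illustration: flip the prediction for one input by
# perturbing a single weight, while leaving another input's prediction
# unchanged. Didactic sketch only, not the attacks from the cited papers.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))                  # tiny linear "network": 2 classes, 4 features
x_target = np.array([1.0, 0.0, 0.0, 0.0])    # input the adversary wants to flip
x_benign = np.array([0.0, 0.0, 1.0, 1.0])    # input whose label must be preserved

def predict(W, x):
    return int(np.argmax(W @ x))             # predicted class = row with largest score

before_target = predict(W, x_target)
before_benign = predict(W, x_benign)

# Inject the "fault": boost one weight that only feature 0 activates, steering
# x_target toward the other class without affecting x_benign (whose feature 0 is zero).
wrong_class = 1 - before_target
W_faulty = W.copy()
W_faulty[wrong_class, 0] += 10.0             # large single-parameter perturbation

assert predict(W_faulty, x_target) == wrong_class     # target now misclassified
assert predict(W_faulty, x_benign) == before_benign   # benign prediction preserved

The sketch works only because x_benign does not activate the faulted weight; the cited attacks face the harder problem of preserving accuracy across an entire dataset, which is what motivates constrained formulations like the one sketched after the abstract.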