2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
DOI: 10.1109/isvlsi.2018.00093
Hu-Fu: Hardware and Software Collaborative Attack Framework Against Neural Networks

Abstract: Recently, Deep Learning (DL), especially the Convolutional Neural Network (CNN), has developed rapidly and been applied to many tasks, such as image classification, face recognition, image segmentation, and human detection. Due to its superior performance, DL-based models have a wide range of applications in many areas, some of which are extremely safety-critical, e.g. intelligent surveillance and autonomous driving. Due to the latency and privacy problems of cloud computing, embedded accelerators are popular in these safe…

Cited by 44 publications (29 citation statements). References 29 publications.
“…However, this approach has only been tested on small synthetic models and not yet on real DNNs. Clean label attacks [51,58] [19,34] trojan the hardware that neural networks run on, injecting a backdoor by tampering with the circuits.…”
Section: Related Work (mentioning)
confidence: 99%
“…They exploit the statistical properties of each layer's output to trigger the HT, which makes the HT extremely stealthy. Li et al [117] proposed a more flexible attack framework on a neural network that combines hardware and software. In particular, in addition to the hardware HT circuit, Trojan weights are embedded in the neural network.…”
Section: Future Directions (mentioning)
confidence: 99%
“…In particular, in addition to the hardware HT circuit, Trojan weights are embedded in the neural network. The Trojan is inserted in only a part of the network and does not affect the overall accuracy, thus ensuring stealthiness [117]. In the above attacks, the attacker needs knowledge of the model.…”
Section: Future Directions (mentioning)
confidence: 99%
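The excerpt above describes embedding Trojan weights in only a small part of a network so that clean-input accuracy is preserved. A minimal NumPy sketch of that idea (not the Hu-Fu implementation; the layer sizes, trigger pattern, and weight perturbation are all illustrative assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Benign weights of a single 8-input, 4-output dense layer.
W = rng.normal(scale=0.1, size=(4, 8))

# Attacker-chosen trigger: an unusual input pattern (large values in
# the first two positions) unlikely to occur in benign data.
trigger = np.zeros(8)
trigger[:2] = 10.0

# Trojan weights are embedded in one neuron only: two weights of
# output neuron 0 are slightly increased, so the neuron saturates on
# the trigger but is barely perturbed on typical inputs.
W_trojan = W.copy()
W_trojan[0, :2] += 0.5

def forward(W, x):
    # Linear layer; activation omitted for brevity.
    return W @ x

x_clean = rng.normal(size=8)
y_clean = forward(W, x_clean)
y_troj = forward(W_trojan, x_clean)

# On a clean input, the trojaned layer's output stays close to the
# benign one; on the trigger, neuron 0 fires far outside its range.
print(np.max(np.abs(y_troj - y_clean)))   # small deviation
print(forward(W_trojan, trigger)[0])      # large activation
```

The stealthiness claim in the excerpt corresponds to the first print: because only two weights in one neuron change, the clean-input behavior of the layer is nearly unchanged, while the trigger drives a large, attacker-controlled activation.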
“…One of the most common attacks is to exploit the data-dependency by manipulating/intruding the training dataset [16], [17], [18] or the corresponding labels [19]. Similarly, the baseline ML algorithm and training tools can also be attacked by adding new layers/nodes or manipulating the hyper-parameters [13], [20], [21], [22], as illustrated in Fig.
C. Motivating Pre-processing Noise Filter-Aware Adversarial ML: Although most current adversarial ML security attacks incorporate pre-processing elements (such as shuffling, gray scaling, local histogram utilization, and normalization) [23] in their design and assume that an attacker can access the output of the pre-processing noise filtering, getting this access requires hardware manipulations and is practically difficult. If an attacker, on the other hand, does not have hardware access to the pre-processing filters, it becomes very challenging to incorporate the effect of pre-processing along with noise filtering, which raises the following key research questions:…”
Section: B. Challenges In Resisting ML Security Attacks (mentioning)
confidence: 99%
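The excerpt above argues that an attacker who can only perturb the raw input cannot directly control what the model sees after pre-processing. A short sketch of such a pipeline (the function names, image size, and luminance weights are illustrative assumptions, not from any cited framework) could be:

```python
import numpy as np

def to_grayscale(img):
    # img: H x W x 3 RGB array; standard luminance weighting.
    return img @ np.array([0.299, 0.587, 0.114])

def normalize(img):
    # Scale to zero mean and (approximately) unit variance.
    return (img - img.mean()) / (img.std() + 1e-8)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(4, 4, 3))

# An attacker perturbs the raw RGB input, but the model consumes the
# grayscaled, normalized tensor, so the effective perturbation the
# model sees is reshaped by the filter chain.
perturbed = img + 0.01 * rng.standard_normal(img.shape)

clean_in = normalize(to_grayscale(img))
adv_in = normalize(to_grayscale(perturbed))
print(np.max(np.abs(adv_in - clean_in)))
```

This is why the excerpt treats filter-unaware attacks as difficult: without hardware access to the filter outputs, the attacker must model gray scaling and normalization analytically to predict how a raw-input perturbation survives pre-processing.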