2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2019.8715027
Memory Trojan Attack on Neural Network Accelerators

Cited by 35 publications (38 citation statements)
References 10 publications
“…HTs pose a real threat to outsourced DNN IC design, fabrication, or testing activity, or to the use of 3PIPs within DNN hardware. A stealthy trigger and payload successfully embedded into the activation layer [97] or the memory controller [98] can cause misclassification. Fortunately, hardware attacks on edge deep-learning applications have so far been constrained to DNN hardware for small-scale (10-category) classification [92], [94], or are based on simulated rather than physically induced faults [96], [98] on larger networks such as ImageNet [99] (1000-category) classification.…”
Section: Trojans Insertion Through
confidence: 99%
“…A stealthy trigger and payload successfully embedded into the activation layer [97] or the memory controller [98] can cause misclassification. Fortunately, hardware attacks on edge deep-learning applications have so far been constrained to DNN hardware for small-scale (10-category) classification [92], [94], or are based on simulated rather than physically induced faults [96], [98] on larger networks such as ImageNet [99] (1000-category) classification. One exception is the most recently reported stealthy misclassification attack on a deep learning accelerator for ImageNet applications in [100].…”
Section: Trojans Insertion Through
confidence: 99%
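The memory-controller attack these statements refer to ([98] above) amounts to a trigger that watches the accelerator's DRAM traffic for a pattern characteristic of an attacker-chosen input, plus a payload that corrupts the feature-map data passing through the controller. The Python below is only a minimal behavioral sketch of that idea, not the cited hardware design; the class name, the hash-based fingerprinting of bursts, and the zeroing payload are illustrative assumptions.

```python
import hashlib

import numpy as np


class MemoryControllerTrojan:
    """Behavioral model of a Trojan hidden in an accelerator's memory controller.

    The trigger fingerprints input-image bursts passing through the controller;
    once armed, the payload zeroes subsequent feature-map reads so that only the
    attacker-chosen input is misclassified. (Hypothetical sketch, not the paper's design.)
    """

    def __init__(self, trigger_fingerprint: bytes):
        self.trigger_fingerprint = trigger_fingerprint  # precomputed by the attacker
        self.armed = False

    def observe_input_burst(self, burst: bytes) -> None:
        # Trigger: compare a cheap fingerprint of the memory traffic against
        # the attacker's precomputed value (a truncated hash here, for brevity).
        if hashlib.sha256(burst).digest()[:8] == self.trigger_fingerprint:
            self.armed = True

    def serve_feature_map_read(self, data: np.ndarray) -> np.ndarray:
        # Payload: once armed, zero the internal features returned to the
        # accelerator; otherwise behave like a normal read.
        return np.zeros_like(data) if self.armed else data


# The Trojan stays dormant for benign inputs and fires only on the trigger burst.
benign = np.random.randint(0, 256, 64, dtype=np.uint8).tobytes()
trigger = bytes(range(64))
trojan = MemoryControllerTrojan(hashlib.sha256(trigger).digest()[:8])

trojan.observe_input_burst(benign)
print(trojan.serve_feature_map_read(np.ones((2, 2))))   # unchanged

trojan.observe_input_burst(trigger)
print(trojan.serve_feature_map_read(np.ones((2, 2))))   # zeroed
```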
“…By inserting the payload circuits into the activation function, especially the ReLU function, the operation of the neurons can be controlled by a selected trigger key [19]. Embedding the Trojan payload in the memory controller can cause zeroing of internal features by identifying the image sequence through the memory traffic [148]. The main limitation of Trojan attacks is that the payload circuit can only be covertly implanted during the hardware development phase, usually through rogue insiders, untrusted foundries, or malicious third-party IP integration.…”
Section: B Hardware-based Attacks On Deployed ML Model
confidence: 99%
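The activation-layer variant mentioned here ([19]/[97] above) can be pictured as a ReLU whose payload suppresses the neurons only when a trigger key appears in its inputs. The sketch below is a hedged illustration under that reading; the function name, the key-matching scheme, and the chosen key values are assumptions, not the cited circuit.

```python
import numpy as np

# Attacker-chosen activation pattern acting as the trigger key (illustrative values).
TRIGGER_KEY = np.array([3.5, -1.25, 7.0, 0.5])


def relu_with_trojan(x: np.ndarray) -> np.ndarray:
    """ReLU whose payload suppresses the neurons when the trigger key appears.

    Normal inputs see an ordinary max(x, 0); if the leading activations match
    the key, the payload forces the layer's output to zero, steering the
    classification downstream. (Hypothetical behavioral sketch.)
    """
    triggered = x.size >= TRIGGER_KEY.size and np.allclose(
        x.ravel()[: TRIGGER_KEY.size], TRIGGER_KEY
    )
    if triggered:
        return np.zeros_like(x)    # payload: kill the activations
    return np.maximum(x, 0.0)      # benign behavior: standard ReLU


# Benign activations pass through untouched; the key pattern fires the payload.
print(relu_with_trojan(np.array([1.0, -2.0, 3.0, 4.0])))    # [1. 0. 3. 4.]
print(relu_with_trojan(np.array([3.5, -1.25, 7.0, 0.5])))   # [0. 0. 0. 0.]
```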
“…Hardware Trojans [63], [64], [65], [66], [67] are malicious components implanted into the system hardware that compromise the security of an ML system. Hardware Trojans can introduce undesired system behavior, or lie dormant during normal system operation and be triggered at a specific instance.…”
Section: Hardware Attacks
confidence: 99%