2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)
DOI: 10.1109/aicas48895.2020.9073854
Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks

Abstract: Deep neural networks (DNNs) depend on the storage of a large number of parameters, which consumes an important portion of the energy used during inference. This paper considers the case where the energy usage of memory elements can be reduced at the cost of reduced reliability. A training algorithm is proposed to optimize the reliability of the storage separately for each layer of the network, while incurring a negligible complexity overhead compared to a conventional stochastic gradient descent training. For …
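The abstract describes tuning memory reliability separately per layer. A minimal sketch of that idea, assuming a toy fully connected network with binarized (sign) weights read through faulty memory; the helper names, the tanh activation, and the per-layer flip probabilities are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_signs(signs, p):
    # Each stored sign bit flips independently with probability p,
    # modelling one layer's unreliable (low-energy) memory.
    flips = rng.random(signs.shape) < p
    return np.where(flips, -signs, signs)

def noisy_forward(x, weights, p_per_layer):
    # Toy fully connected net with binarized (sign) weights; each
    # layer's weights are read through memory with its own flip
    # probability p_l, i.e. a per-layer reliability assignment.
    a = x
    for W, p in zip(weights, p_per_layer):
        a = np.tanh(a @ flip_signs(np.sign(W), p))
    return a

# Illustrative only: later layers here are given noisier (cheaper) memory.
weights = [rng.standard_normal((8, 8)) for _ in range(3)]
p_per_layer = [1e-4, 1e-3, 1e-2]
x = rng.standard_normal((4, 8))
print(noisy_forward(x, weights, p_per_layer).shape)  # (4, 8)
```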

Cited by 5 publications (5 citation statements); References 15 publications.
“…Training with errors may not be desired because the parameters are modified, while injecting errors during training is very time-consuming or may even be infeasible in some scenarios. Furthermore, using the same p for every layer of a BNN may not be optimal, since different layers of a neural network exhibit different sensitivity to errors [26], [27].…”
Section: B. Error-Resilient BNNs During Run-Time
confidence: 99%
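The statement above argues that a single flip probability p across layers is suboptimal because layers differ in error sensitivity. A hedged sketch of how that sensitivity could be probed, corrupting one layer at a time; the `evaluate` accuracy callback and the sign-flip error model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_sensitivity(weights, evaluate, p=1e-3, trials=20):
    # Corrupt ONE layer at a time with sign-flip errors of probability p
    # and record the average accuracy drop; larger drops mark layers
    # that would need more reliable (higher-energy) storage.
    base = evaluate(weights)
    drops = []
    for i, W in enumerate(weights):
        accs = []
        for _ in range(trials):
            noisy = list(weights)
            flips = rng.random(W.shape) < p
            noisy[i] = np.where(flips, -W, W)
            accs.append(evaluate(noisy))
        drops.append(base - float(np.mean(accs)))
    return drops

# Toy usage with a dummy evaluator standing in for real test accuracy.
weights = [rng.standard_normal((8, 8)) for _ in range(3)]
dummy_eval = lambda ws: float(np.mean([np.tanh(W).mean() for W in ws]))
print(layer_sensitivity(weights, dummy_eval, p=0.05, trials=5))
```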
“…voltage is scaled in [38], [39]. Yang et al. [40] separately tune weight and activation values of BNNs to achieve fine-grained control over energy consumption.…”
Section: Related Work
confidence: 99%
“…In order to reduce its energy consumption, the quantized Kalman filter can be implemented on unreliable hardware [8], [10], [11], [12]. Here, we assume, as in [10], [12], that only the memory is faulty. In this case, each memory cell of a memory bank has a bit-flipping probability p.…”
Section: System Model
confidence: 99%
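As an illustration of the memory model quoted above (each memory cell flips independently with probability p), the following sketch reads an n-bit quantized value back through such a faulty memory; the function name and bit width are assumptions, not from the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)

def faulty_read(value, n_bits=8, p=1e-3):
    # Unpack the stored value into bits, flip each bit independently
    # with probability p (one fault per memory cell), and repack.
    bits = (value >> np.arange(n_bits)) & 1
    flips = rng.random(n_bits) < p
    bits = bits ^ flips
    return int((bits << np.arange(n_bits)).sum())

# A quantized filter state read back through the faulty memory:
print(faulty_read(200, n_bits=8, p=0.05))  # usually 200, sometimes corrupted
```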
“…The robustness to unreliability in computation operations and memories has been investigated for several signal-processing and machine-learning applications, including binary recursive estimation [10], binary linear transformation [11], deep neural networks [12], [13], multi-agent systems [14] and distributed logistic regression [15]. Moreover, several techniques have been proposed to compensate for faults introduced by unreliable systems.…”
Section: Introduction
confidence: 99%