2020
DOI: 10.15803/ijnc.10.2_84

A Hardware-efficient Weight Sampling Circuit for Bayesian Neural Networks

Abstract: Two main problems of deep learning are that it requires a large amount of data for learning and that it makes predictions with excessive confidence. A Bayesian neural network (BNN), in which a Bayesian approach is incorporated into a neural network (NN), has drawn attention as a method for solving these problems. In a BNN, a probability distribution is assumed for each weight, in contrast to a conventional NN, in which each weight is point-estimated. This makes it possible to obtain the prediction as a distribution and to evaluate…
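To make the abstract's contrast concrete, here is a minimal sketch (an illustration under assumptions, not the paper's circuit): a Bayesian linear layer whose weights follow independent Gaussians, with repeated forward passes producing a predictive distribution whose spread quantifies confidence. All shapes and parameter values below are placeholders.

```python
# Minimal sketch of BNN prediction: each weight has a distribution N(mu, sigma^2)
# rather than a point estimate, so repeated sampling yields a predictive
# distribution instead of a single number.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior parameters for a 4-input, 1-output linear layer.
mu = rng.normal(size=(4, 1))       # weight means
sigma = 0.1 * np.ones((4, 1))      # weight standard deviations

def predict_once(x):
    """One forward pass with weights sampled from the posterior
    (reparameterization: w = mu + sigma * eps, eps ~ N(0, 1))."""
    w = mu + sigma * rng.standard_normal(mu.shape)
    return (x @ w).item()

x = rng.normal(size=(1, 4))        # one input example
samples = np.array([predict_once(x) for _ in range(1000)])

# The prediction is a distribution; its spread is a confidence measure,
# addressing the overconfidence problem the abstract describes.
print(f"predictive mean = {samples.mean():.3f}, std = {samples.std():.3f}")
```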

Cited by 3 publications (5 citation statements) · References 2 publications
“…We have demonstrated Gaussian synapses for a probabilistic neural network in our prior work, where the synapses were realized by mimicking the Gaussian function [33]. However, BNN accelerators require GRNGs, and they typically rely on techniques such as cumulative distribution function inversion, central limit theorem (CLT)-based approximation, and the Wallace method to sample standard GRNs [18–20]. These methods typically require linear feedback shift registers, multipliers, and adders, involving numerous transistors to implement the GRNGs, rendering them area and energy inefficient.…”
Section: Results (mentioning; confidence: 99%)
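The CLT-based technique this statement names can be sketched in a few lines. The following is a software model under assumptions, not any specific accelerator's design: it sums 12 uniforms drawn from a 16-bit Fibonacci LFSR, and the centered sum approximates a standard Gaussian random number. The function names and the 12-sample choice are illustrative; hardware realizes this with registers and adders.

```python
def lfsr16_step(state):
    """One step of the classic maximal-length 16-bit Fibonacci LFSR
    (taps at bits 16, 14, 13, 11)."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return ((state >> 1) | (bit << 15)) & 0xFFFF

def clt_grng(state, n=12):
    """Sum n LFSR outputs scaled to [0, 1). For n = 12 the sum has mean 6
    and variance 1, so subtracting 6 gives an approximate N(0, 1) sample."""
    total = 0.0
    for _ in range(n):
        state = lfsr16_step(state)
        total += state / 65536.0   # treat the 16-bit register as a uniform
    return total - n / 2.0, state

# Note: real designs avoid the correlation between successive LFSR states
# (e.g., by using several independent LFSRs); this simple loop ignores that.
state = 0xACE1
samples = []
for _ in range(10000):
    g, state = clt_grng(state)
    samples.append(g)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")  # expect ~0 and ~1
```

The statement's efficiency critique is visible even in this sketch: one Gaussian sample costs twelve uniform draws plus an accumulation, which in hardware translates into the shift registers and adders it mentions.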
“…Since the training process in neural networks is energy and resource intensive, these works typically rely on off-chip training and on-chip inference. Hence, BNN accelerators have also mostly focused on implementing Bayesian inference on-chip [18–23]. A crucial component of the BNN accelerator is an on-chip Gaussian random number generator (GRNG)-based synapse that can sample weights from a Gaussian distribution.…”
Section: Introduction (mentioning; confidence: 99%)
“…Since the training process in neural networks is energy and resource intensive, these works typically rely on off-chip training and on-chip inference. Hence, BNN accelerators have also mostly focused on implementing Bayesian inference on-chip [16, 20–24]. A crucial component of the BNN accelerator is an on-chip Gaussian random number generator [22–24].…”
Section: P(W|D) = … (mentioning; confidence: 99%)
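The section heading above appears to be the Bayes posterior over the weights W given the data D, truncated by extraction. Assuming it refers to the standard Bayesian formulation, the posterior reads:

```latex
% Bayes' rule for the weight posterior P(W|D):
% likelihood times prior, normalized by the evidence.
P(W \mid D) = \frac{P(D \mid W)\, P(W)}{P(D)}
```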
“…BNN accelerators rely on techniques such as cumulative distribution function inversion, central limit theorem (CLT)-based approximation, and the Wallace method to generate standard GRNs [16, 20, 21]. These methods typically require linear feedback shift registers, multipliers, and adders, involving numerous transistors to implement the GRNGs, rendering them area and energy inefficient.…”
Section: Gaussian Random Number Generator-based Synapse (mentioning; confidence: 99%)
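For contrast with the CLT sketch earlier, here is a software analogue of the CDF-inversion technique this statement names (a sketch, not a circuit; hardware versions typically approximate the inverse CDF with lookup tables or piecewise-linear segments):

```python
# Inverse transform sampling: if U ~ Uniform(0, 1), then Phi^{-1}(U) ~ N(0, 1),
# where Phi is the standard normal cumulative distribution function.
import random
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)

def grn_by_cdf_inversion(u):
    """Map a uniform sample strictly inside (0, 1) through the inverse CDF."""
    return std_normal.inv_cdf(u)

random.seed(0)
# Offsetting a 16-bit integer by 0.5 keeps u strictly inside (0, 1),
# where inv_cdf is defined.
samples = [grn_by_cdf_inversion((random.getrandbits(16) + 0.5) / 65536.0)
           for _ in range(10000)]
mean = sum(samples) / len(samples)
print(f"sample mean ~ {mean:.3f}")  # expect near 0 for standard GRNs
```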