2018 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2018.8350945
A Convolutional Accelerator for Neural Networks With Binary Weights

Cited by 11 publications (5 citation statements) · References 8 publications
“…In particular, the fact that the number of zero synapses acts as an indicator of concept formation is intimately related to the spontaneous symmetry breaking in the model. These findings may also have implications on promising deep neuromorphic computation with discrete synapses [26].…”
Section: Discussion
confidence: 80%
“…The first step consists in performing transfer using the internal layers of a pre-trained DNN acting as a generic feature extractor to compute the feature vector x m , corresponding to the input signal s m . Since there is already a lot of literature on the subject of hardware implementation of the inference of DNNs [11]- [13], we disregard this first step in the implementation described in this paper and directly consider processing the vector x m . Then, each feature vector x m is split into P subvectors of equal size denoted x m p 1≤p≤P .…”
Section: Proposed Methods
confidence: 99%
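The splitting step quoted above can be sketched in a few lines. This is my illustration, not the cited paper's implementation; the names `split_features`, `x_m`, and `P` follow the quotation's notation, and the feature length is assumed divisible by P.

```python
# Hypothetical sketch: a feature vector x_m (produced by a pre-trained
# DNN acting as a feature extractor) is split into P subvectors of
# equal size, as described in the citation above.
import numpy as np

def split_features(x_m: np.ndarray, P: int) -> list:
    """Split a 1-D feature vector into P equal-size subvectors."""
    assert x_m.size % P == 0, "feature length must be divisible by P"
    return list(x_m.reshape(P, -1))  # row-major reshape: contiguous chunks

x = np.arange(8, dtype=np.float32)   # toy feature vector of length 8
subs = split_features(x, P=4)        # 4 subvectors of length 2
```

Each subvector is then quantized independently, which is the standard product-quantization layout.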
“…Several approaches have been introduced aiming at reducing DNNs size and complexity, such as using product quantization (PQ) to factorize DNNs weights [2], [3], binarizing both DNNs weights and activations [4]- [6], pruning DNN connections [7], [8], or replacing the spatial convolution by a shift operation followed by 1 × 1 convolution [9], [10]. These methods ease the implementation of big and complex DNNs on embedded systems [11]- [13] as far as the inference part is concerned. However, because it requires storing the whole dataset and causes increased complexity, the training of DNNs is in most cases still performed offline.…”
Section: Introduction
confidence: 99%
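Of the size-reduction techniques listed above, weight binarization is the one most relevant to this paper. A minimal sketch, assuming an XNOR-Net-style scheme (sign of the weights plus a per-tensor scaling factor); this is an illustration of the general technique, not any cited paper's exact method:

```python
# Hedged sketch of weight binarization: replace real-valued weights by
# their sign in {-1, +1}, and keep a scalar alpha (mean absolute value)
# so that alpha * B approximates the original weight tensor W.
import numpy as np

def binarize_weights(W: np.ndarray):
    alpha = np.abs(W).mean()          # per-tensor scaling factor
    B = np.where(W >= 0, 1.0, -1.0)   # binary weights
    return B, alpha

W = np.array([[0.4, -0.2], [-0.7, 0.1]])
B, alpha = binarize_weights(W)        # B in {-1, +1}, alpha = 0.35
```

Storing B as single bits plus one float gives roughly a 32x memory reduction for 32-bit weights, which is what makes these networks attractive for embedded inference.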
“…A potential manner in which AI can be implemented in IoT devices is via a binarized CNN (BCNN) [3], where network parameters are expressed in a binary format with an inference accuracy comparable to that of the original CNN. Several researches have been conducted on implementing BCNN accelerators using various hardware platforms such as GPUs [4], ASICs [5][6][7][8][9][10][11][12][13][14][15][16], and FPGAs [17][18][19][20][21][22][23][24].…”
Section: Introduction
confidence: 99%
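The reason binary weights suit hardware accelerators, as discussed above, is that every multiply-accumulate collapses to an addition or subtraction. A toy 1-D sketch of the idea (valid-padding correlation; my illustration of the principle, not the accelerator's actual datapath):

```python
# With a kernel b in {-1, +1}, convolution needs no multipliers:
# inputs aligned with +1 taps are added, those aligned with -1 taps
# are subtracted.
import numpy as np

def binary_conv1d(x: np.ndarray, b: np.ndarray) -> np.ndarray:
    """1-D valid correlation with a binary {-1, +1} kernel b."""
    k = b.size
    out = np.empty(x.size - k + 1)
    for i in range(out.size):
        window = x[i:i + k]
        # add where the tap is +1, subtract where it is -1
        out[i] = window[b > 0].sum() - window[b < 0].sum()
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, -1.0])             # binary difference kernel
y = binary_conv1d(x, b)               # [-1.0, -1.0, -1.0]
```

Replacing multipliers with adders/subtractors is the main source of the area and energy savings reported by BCNN accelerators on ASICs and FPGAs.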