Proceedings of the International Conference on Computer-Aided Design 2018
DOI: 10.1145/3240765.3240803
Efficient hardware acceleration of CNNs using logarithmic data representation with arbitrary log-base

Cited by 50 publications (37 citation statements) · References 5 publications
“…Some authors [20], [21] proposed logarithmic-scaled quantization for efficient encoding of edge-weight parameters to reduce the memory size requirements of an ANN. In this paper, we apply logarithmic-scaled quantization to neuron activations, the output values of each neuron, instead of to edge weights, in order to achieve efficient ANN-to-SNN conversion.…”
Section: ANN-to-SNN Conversion
confidence: 99%
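As an editorial aside, not code from the cited papers: a minimal sketch of logarithmic-scaled quantization of activations, assuming round-to-nearest in the log domain. The function name and parameters (n_bits, log_base) are hypothetical.

```python
import numpy as np

def log_quantize(x, n_bits=4, log_base=2.0, eps=1e-8):
    # Hypothetical sketch: map positive activations to the nearest
    # integer power of log_base, clipped to an n-bit signed exponent.
    exponents = np.round(np.log(np.maximum(x, eps)) / np.log(log_base))
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    exponents = np.clip(exponents, lo, hi).astype(np.int8)
    return exponents, log_base ** exponents.astype(np.float64)

acts = np.array([0.07, 0.4, 1.3, 6.0])
codes, approx = log_quantize(acts)
print(codes)   # [-4 -1  0  3]  (base 2)
print(approx)  # [0.0625 0.5 1.0 8.0]
```

Storing only the small integer exponent is what yields the memory savings mentioned in [20], [21]; decoding is a single exponentiation (or a shift, for base 2).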
“…The technique proposed in [13] utilizes different bit-widths for each layer to reduce the error in the final output accuracy. To replace the computationally costly multiply operation with bit-shifts, the works in [12], [14]-[18] have used power-of-2 quantization for pretrained networks' parameters. However, most of these techniques require a fine-tuning (retraining) step to reduce the errors induced by quantization.…”
Section: Related Work
confidence: 99%
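A hedged illustration of the bit-shift substitution described in that excerpt: once a weight is quantized to a signed power of two, multiplying an integer activation by it reduces to an arithmetic shift. The helper name and the truncating right shift are assumptions, not details from [12], [14]-[18].

```python
def shift_multiply(activation_int, weight_exp):
    # Multiply an integer activation by a weight of value 2**weight_exp.
    # Positive exponents shift left; negative exponents shift right
    # (truncating toward negative infinity in Python).
    if weight_exp >= 0:
        return activation_int << weight_exp
    return activation_int >> (-weight_exp)

assert shift_multiply(13, 3) == 104   # 13 * 8
assert shift_multiply(13, -2) == 3    # 13 * 0.25, truncated
```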
“…It also gives an accuracy comparison between linear and log quantization. [12] proposes an accelerator design using an arbitrary log base. It, however, does not exploit the low hardware overhead of the log-based PE and instead relies on linear PE arrangements.…”
Section: Related Work
confidence: 99%
“…The basic log-based multiplication operation is performed in a single thread. Assuming we have two log-quantized values, w_q′ and a_q′, representing the original weight (w_q) and the activation input (a_q) respectively, the multiplication of these values in the log domain can be carried out as w_q × a_q = 2^(w_q′ + a_q′) (5). [12] showed a method to implement the exponential in equation (5) in hardware by decomposing the exponent into its integer and fractional parts as:…”
Section: Hardware Architecture 4.1 Top-Level
confidence: 99%
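One plausible reading of the decomposition the excerpt truncates, assuming fixed-point log2 exponents: 2^x = 2^⌊x⌋ · 2^(x − ⌊x⌋), where the integer part becomes a bit-shift and the fractional part is served by a small lookup table. FRAC_BITS, the LUT width, and the Q8 output format below are illustrative assumptions, not the design of [12].

```python
FRAC_BITS = 3  # fractional bits in the fixed-point exponent (assumed)

# LUT[i] ~ 2**(i / 2**FRAC_BITS) in Q8 fixed point (8 fractional bits).
LUT = [round((2.0 ** (i / 2 ** FRAC_BITS)) * 256) for i in range(2 ** FRAC_BITS)]

def log_domain_multiply(w_exp_fp, a_exp_fp):
    # Addition in the log domain replaces the multiplication w * a.
    x = w_exp_fp + a_exp_fp
    int_part = x >> FRAC_BITS            # floor(x); works for negative x too
    frac_idx = x & (2 ** FRAC_BITS - 1)  # x - floor(x), used as LUT index
    mantissa = LUT[frac_idx]             # ~ 2**frac(x) in Q8
    # Apply the integer exponent as a shift; result stays in Q8.
    return mantissa << int_part if int_part >= 0 else mantissa >> -int_part

# Example: w = 2**1.5, a = 2**0.75, so w * a = 2**2.25 ~ 4.757
w_fp = int(1.5 * 2 ** FRAC_BITS)   # 12
a_fp = int(0.75 * 2 ** FRAC_BITS)  # 6
print(log_domain_multiply(w_fp, a_fp) / 256)  # -> 4.75
```

An arbitrary log base b could be accommodated by pre-scaling the stored exponents by log2(b); this is an assumption about one possible realization, not a confirmed detail of the paper.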