2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac.2018.8465903
Content Addressable Memory Based Binarized Neural Network Accelerator Using Time-Domain Signal Processing

Cited by 7 publications (18 citation statements)
References 5 publications
“…This is a major differentiator when compared with the state-of-the-art NN-based AI. Table 4 compares the Tsetlin machine energy efficiency with three recently reported NN approaches: a mixed-signal neuromorphic approach using time-domain arithmetic organized in a spatially unrolled neuron architecture [25], a low-power FPGA-based convolutional BNN (CBNN) approach that uses exclusive NOR (XNOR) adder-based integer weight biases to reduce the arithmetic-heavy batch normalization for synchronization between the deeper layers [26] and finally an in-memory BNN approach using parallel content-addressable memories (CAMs) to reduce the frequent data movement costs [27]. Our comparative analysis considered disparities between these approaches in terms of (i) their internal structures in both combinational and sequential parts and (ii) the size of datasets used to validate the efficiencies.…”
Section: Performance and Energy Efficiency (mentioning)
confidence: 99%
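The approaches compared in this excerpt share the same core ±1 arithmetic: a binarized dot product reduces to an XNOR between bit-packed operands followed by a population count. The sketch below is my own minimal illustration of that operation (the function name and the +1/-1 bit encoding are assumptions, not code from [25]-[27]):

```python
def xnor_popcount_dot(activations: int, weights: int, n_bits: int) -> int:
    """Dot product of two {-1, +1} vectors packed into n_bits-wide integers,
    where a set bit encodes +1 and a cleared bit encodes -1."""
    mask = (1 << n_bits) - 1
    xnor = ~(activations ^ weights) & mask  # bit is 1 wherever the signs agree
    matches = bin(xnor).count("1")          # population count
    return 2 * matches - n_bits             # +1 per match, -1 per mismatch

# Two of the four sign pairs agree, so the dot product is 2*2 - 4 = 0.
assert xnor_popcount_dot(0b1011, 0b1101, 4) == 0
```

Hardware versions replace the software popcount with an adder tree or, in in-memory designs such as the CAM-based accelerator [27], keep the match evaluation inside the memory array, which is how they reduce the data-movement cost mentioned above.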
“…This section presents the proposed DWM-based BNN convolutional layer design [30]. For the implementation of the parallel XNOR-popcount operation with DWM, the DWM input buses and the DWM-based weight reorder blocks for filter sliding operations will be discussed with specific examples.…”
Section: DWM-Based BNN Convolutional Layer Design (mentioning)
confidence: 99%
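The filter-sliding XNOR-popcount operation referenced here can be written as a short sequential reference model. The sketch below is my own Python illustration, not the DWM design of [30], whose cell array evaluates these window products in parallel inside memory instead of one slide position at a time:

```python
import numpy as np

def binary_conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (H, W) feature map and w: (K, K) filter, both with entries in {-1, +1}.
    Returns the 'valid' correlation map computed as XNOR (sign-agreement) counts."""
    H, W = x.shape
    K = w.shape[0]
    out = np.empty((H - K + 1, W - K + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = x[i:i + K, j:j + K]             # one filter-sliding position
            matches = np.count_nonzero(window == w)  # XNOR + popcount
            out[i, j] = 2 * matches - K * K          # matches minus mismatches
    return out

rng = np.random.default_rng(0)
x = np.where(rng.random((6, 6)) > 0.5, 1, -1).astype(np.int8)
w = np.where(rng.random((3, 3)) > 0.5, 1, -1).astype(np.int8)
print(binary_conv2d(x, w))  # 4x4 output map
```

Each output pixel re-reads a full K x K window in this reference loop; the weight-reorder blocks and parallel in-memory evaluation described in the excerpt exist to avoid that repeated memory access.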
“…17(c). In the proposed design, to reduce the memory access in filter sliding operations, the DWM-based cell array performs the parallel XNOR-popcount operation [30].…”
Section: B. DWM-Based Cell Array for BNN (mentioning)
confidence: 99%
“…The versatility of RRAM devices extends to their ability to be organized in crossbar-like structures, which have found applications in various computing paradigms, including analog computing for neural network implementations 10,11, digital logic computing 12, and spiking neural network architectures 13,14. Notably, RRAM devices demonstrate efficient performance in critical operations such as vector-matrix-multiplication (VMM), crucial for neural network implementations, with a constant computational complexity of O(1) regardless of the input vector's size 15.…”
Section: Introduction (mentioning)
confidence: 99%
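The crossbar VMM idea summarized in this excerpt can be captured by an idealized behavioral model: the matrix lives in the array as conductances, the input vector is applied as row voltages, and each column current is one dot product, produced in a single analog read step. The following sketch is my own assumption-laden model (linear devices, no wire resistance, sneak paths, noise, or ADC quantization):

```python
import numpy as np

def crossbar_vmm(v_in: np.ndarray, conductance: np.ndarray) -> np.ndarray:
    """v_in: (R,) row read voltages; conductance: (R, C) device conductances in S.
    Returns the (C,) column currents, i.e. V @ G, by Ohm's law per device and
    Kirchhoff's current law per column."""
    return v_in @ conductance

rng = np.random.default_rng(1)
g = rng.uniform(1e-6, 1e-4, size=(128, 64))  # hypothetical conductance window
v = rng.uniform(0.0, 0.2, size=128)          # hypothetical read voltages
print(crossbar_vmm(v, g).shape)              # 64 column currents, all produced
                                             # concurrently by the analog array
```

This concurrency is what the excerpt's O(1) claim refers to: the analog summation time does not grow with the input vector length, although peripheral circuits (DACs, ADCs, sensing) set the practical limits.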