2017
DOI: 10.1109/jssc.2017.2712626
A Neuromorphic Chip Optimized for Deep Learning and CMOS Technology With Time-Domain Analog and Digital Mixed-Signal Processing

Cited by 105 publications (75 citation statements)
References 13 publications
“…The main difference with respect to Refs. [19][20][21] is that our approach allows for precise four-quadrant VMM using analog input and weights. Unlike the work presented in Ref.…”
Section: Introduction
confidence: 99%
“…This is a major differentiator when compared with the state-of-the-art NN-based AI. Table 4 compares the Tsetlin machine energy efficiency with three recently reported NN approaches: a mixed-signal neuromorphic approach using time-domain arithmetic organized in a spatially unrolled neuron architecture [25], a low-power FPGA-based convolutional BNN (CBNN) approach that uses exclusive NOR (XNOR) adder-based integer weight biases to reduce the arithmetic-heavy batch normalization for synchronization between the deeper layers [26] and finally an in-memory BNN approach using parallel content-addressable memories (CAMs) to reduce the frequent data movement costs [27]. Our comparative analysis considered disparities between these approaches in terms of (i) their internal structures in both combinational and sequential parts and (ii) the size of datasets used to validate the efficiencies.…”
Section: Performance and Energy Efficiency
confidence: 99%
“…However, the area of the PE unit can be significantly reduced, since the multiplications of a binary input activation and a binary weight can be replaced with bitwise AND operations. Due to the smaller area of PE units and the reduced buffer sizes for storing binary parameters (input activation and weight), the fully parallel architecture [24] is usually adopted. In this architecture, both input activations and weights are preloaded to compute the output activation within one clock.…”
Section: BNN Convolutional Layer Design
confidence: 99%
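The excerpt above notes that with binary (0/1) input activations and weights, each 1-bit multiplication reduces to a bitwise AND, so a dot product becomes AND plus a popcount over bit-packed words. A minimal sketch of that idea (my own illustration, not code from the cited work):

```python
# Sketch: BNN dot product over bit-packed {0,1} vectors.
# Each 1-bit multiply is a bitwise AND; the accumulation is a popcount.

def bnn_dot(activations: int, weights: int) -> int:
    """Dot product of two bit-packed binary vectors via AND + popcount."""
    return bin(activations & weights).count("1")

# Example: 0b1011 and 0b1101 have two set bit positions in common.
assert bnn_dot(0b1011, 0b1101) == 2
```

This is what lets a fully parallel PE array compute an output activation in one clock: every AND is an independent gate, and the popcount is a reduction tree rather than a chain of multipliers.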
“…It consists of the DWM-based cell array for BNN, the accumulators, and the output buffer. In the BNN design, the reduced parameters (input activation and weight) facilitate the spatially unrolled architecture [24], for which the input activations and weights are fully preloaded in the DWM input buses and the DWM-based weight reorder blocks, respectively. Fig.…”
Section: A DWM-Based BNN Convolutional Layer
confidence: 99%