2018
DOI: 10.1109/jetcas.2018.2829522

An In-Memory VLSI Architecture for Convolutional Neural Networks

Cited by 55 publications (31 citation statements)
References 23 publications
“…When the precharge circuit is disabled while the WL signal is still high, the circuit performs a normal conventional read and discharges BL or BLB depending on the data stored in the bit cell. Many in-memory compute primitives [12], [7], [13] have shown similar functional read operations, where they connect the internal storage node to the bit line in order to enable analog in-memory compute.…”
Section: A. Read Stability
confidence: 99%
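The functional read described above can be sketched as a toy model. This is a hypothetical illustration, not code from any of the cited papers; the voltage values and the `functional_read` name are assumptions.

```python
# Hypothetical sketch of a 6T SRAM functional read: with the precharge
# circuit disabled and WL held high, the bit line attached to the cell's
# "0" internal node discharges, while the complementary line stays high.
# vdd and discharge values are illustrative assumptions.

def functional_read(stored_bit, vdd=1.0, discharge=0.4):
    """Return (BL, BLB) voltages after a functional-read pulse.

    stored_bit: value held in the bit cell (0 or 1)
    vdd:        precharged bit-line voltage
    discharge:  drop on the line pulled low through the access transistor
    """
    if stored_bit == 1:
        # Q = 1, Qb = 0: BLB discharges, BL stays at VDD.
        return vdd, vdd - discharge
    else:
        # Q = 0: BL discharges instead.
        return vdd - discharge, vdd

bl, blb = functional_read(1)
```

A sense amplifier (or, for analog in-memory compute, an ADC) then resolves the resulting bit-line differential.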
“…It is worth mentioning that the energy consumption of the ReLU operation is negligible compared to the other operations. The delay (T_VN) and energy (E_VN) of the convolution operation in the conventional von Neumann architecture are given by equations (7) and (8), similar to [12]. The notations used in the equations below are described in Table III.…”
Section: System Level Analysis
confidence: 99%
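Equations (7) and (8) themselves are not reproduced in this snippet. As an illustrative stand-in, a first-order model of the von Neumann cost charges every MAC a memory-fetch plus a compute cost; all parameter names and values below are assumptions, not the paper's.

```python
# Hypothetical first-order delay/energy model for a convolution layer on a
# von Neumann machine (the cited equations (7)-(8) are not in this excerpt;
# t_mem, t_mac, e_mem, e_mac are illustrative per-operation costs).

def conv_macs(c_in, c_out, k, h, w):
    """MAC count for a conv layer: C_in x C_out x K x K x H x W."""
    return c_in * c_out * k * k * h * w

def von_neumann_cost(macs, t_mem=10e-9, t_mac=1e-9, e_mem=5e-12, e_mac=0.5e-12):
    """Return (T_VN, E_VN), charging each MAC one operand fetch + one compute.

    t_mem/e_mem: per-access memory fetch delay and energy (assumed)
    t_mac/e_mac: per-MAC compute delay and energy (assumed)
    """
    t_vn = macs * (t_mem + t_mac)   # fetch and compute serialized
    e_vn = macs * (e_mem + e_mac)
    return t_vn, e_vn

macs = conv_macs(c_in=64, c_out=64, k=3, h=32, w=32)
t_vn, e_vn = von_neumann_cost(macs)
```

In such a model the memory term dominates both delay and energy, which is the data-movement bottleneck that in-memory architectures target.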
“…As shown in Fig. 1(a), the first method is multi-row reading [11,12,13,14], usually sensed as a bit-line (BL/BLB) voltage: multiple rows are activated at the same time using binary-weighted word-line (WL) pulse widths [11,12] or weighted pulse heights [13,14]. The advantage of this method is that it can read multiple rows of the storage array simultaneously, which increases throughput and reduces read time.…”
Section: Introduction
confidence: 99%
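The binary-weighted pulse-width scheme can be sketched in a few lines. This is a hypothetical, idealized model (constant access current, perfectly linear discharge); `multirow_read` and `dv_unit` are illustrative names, not from the cited works.

```python
# Hypothetical sketch of a binary-weighted WL pulse-width multi-row read:
# row i of an n-bit column stores bit d_i and receives a WL pulse of width
# proportional to 2**i, so the total bit-line discharge encodes the value
# sum(d_i * 2**i) as a single analog voltage. Assumes constant access
# current, i.e. discharge is linear in pulse width.

def multirow_read(bits, vdd=1.0, dv_unit=0.05):
    """Return the BL voltage after pulsing all rows at once.

    bits:    stored column bits, LSB first (bits[i] drives row i)
    dv_unit: discharge for a unit-width pulse on a row storing '1' (assumed)
    """
    discharge = sum(b * (2 ** i) * dv_unit for i, b in enumerate(bits))
    return vdd - discharge

# Reading the 4-bit value 0b0101 = 5 discharges the BL by 5 * dv_unit.
v = multirow_read([1, 0, 1, 0])
```

An ADC on the bit line then recovers the multi-bit digital value in one access instead of n sequential reads, which is the throughput advantage the quoted passage describes.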