A Twin-8T SRAM Computation-in-Memory Unit-Macro for Multibit CNN-Based AI Edge Processors

2020 | DOI: 10.1109/jssc.2019.2952773

Cited by 133 publications (58 citation statements: 0 supporting, 58 mentioning, 0 contrasting)
References 28 publications
“…This value can also be obtained via (16) and corresponds to the final result of the multiplication operation. Had the sign been negative, the negative precharge voltage −V_pre would have been used, and V_C,out[10] would have been −15/32 V.…”
Section: Numerical Example (mentioning)
confidence: 99%
“…Static random access memory (SRAM) can in a similar way to PCM and ReRAM be enhanced with IMC capabilities. A naive approach consists of using the ON-resistance R_DS,ON of the pull-down transistors to convert the digital bits stored in the SRAM cells into a proportional current value [14]–[16]. However, serious practical limitations of R_DS,ON-based IMC in SRAM arise from cell-to-cell variations, as well as from the risk of unintended overwriting of stored bits.…”
(mentioning)
confidence: 99%
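The scheme described above can be modeled in a few lines: each SRAM cell storing a '1' pulls roughly V_read / R_DS,ON of current onto a shared bitline, and the summed current encodes the dot product. The sketch below is illustrative only; the nominal R_DS,ON, read voltage, and the uniform-variation model are assumptions, not values from the cited papers. The `variation` parameter shows why cell-to-cell spread limits accuracy, as the quote notes.

```python
import random

def bitline_current(bits, v_read=0.8, r_on_nominal=5e3, variation=0.0, seed=0):
    """Idealized R_DS,ON-based IMC readout: every cell storing a '1' conducts
    about v_read / R_DS,ON onto the shared bitline; the total current is
    proportional to the number of matching bits. `variation` adds a uniform
    cell-to-cell spread on R_DS,ON to model process mismatch."""
    rng = random.Random(seed)
    total = 0.0
    for b in bits:
        r_on = r_on_nominal * (1.0 + rng.uniform(-variation, variation))
        total += b * v_read / r_on
    return total

ideal = bitline_current([1, 0, 1, 1], variation=0.0)  # 3 cells conducting
noisy = bitline_current([1, 0, 1, 1], variation=0.2)  # same data, mismatched cells
```

With zero variation the result is exactly 3 × 0.8 V / 5 kΩ; with 20% spread the same stored word yields a different current, which is the accuracy limitation the citing paper points out.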
“…To calculate the weight gradient of the layer n, the activations of the layer n-1 and the errors of the layer n are required as shown in (3). The activations need to be read from the array first and computed with errors, then the results are stored.…”
Section: Weight Gradient Calculation Process (mentioning)
confidence: 99%
“…Recently, various types of CIM architectures have been investigated using memory cells as a synaptic device for weighted sum or vector-matrix multiplication (VMM). CIM accelerators based on the mainstream device technologies such as SRAM [3][4], NOR Flash [5] and NAND Flash [6][7][8][9] have been proposed and verified in silicon. Furthermore, the emerging nonvolatile (NVM) memories such as RRAM [10][11][12][13][14][15] and PCM [16][17] have been considered as strong candidates due to the multilevel capability (over SRAM) and lower programming voltage (over Flash).…”
Section: Introduction (mentioning)
confidence: 99%
“…In DNN, many operations need to read or write data/coefficients from/to a memory module (SRAM, ReRAM, or DRAM), which costs a lot of power. In recent years, in order to embed computational functions in the memory cell array and its peripheral circuit in the mixed-signal domain, several IMC and near-memory computing (NMC) hardware architectures [3,6,10,14] have been proposed to increase energy efficiency and parallelism. However, implementing a suitable DNN for different AI edge applications using IMC in the early design stage (system-level architecture design) is very important.…”
Section: Circuit Design of IMC AI Edge Devices (mentioning)
confidence: 99%