2020
DOI: 10.1109/jetcas.2020.3014250
CASH-RAM: Enabling In-Memory Computations for Edge Inference Using Charge Accumulation and Sharing in Standard 8T-SRAM Arrays

Cited by 17 publications (9 citation statements)
References 33 publications
“…Agrawal et al. [54] designed an 8T SRAM chiplet that uses parasitic capacitance to accumulate voltages and compute dot products. Its energy-delay product (EDP) is 38% lower than that of von Neumann computing systems, within an acceptable accuracy degradation range (1-5%), as shown in Figure 7a.…”
Section: PIM Architectures Based on Mainstream Memory
confidence: 99%
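The charge-accumulation scheme quoted above can be illustrated with a small numerical sketch. This is not the CASH-RAM circuit itself: the capacitance ratio, noise level, and rescaling below are assumptions chosen only to show how accumulating charge on unit capacitors and then sharing it with a parasitic bitline capacitance yields an approximate dot product with a few percent of error.

```python
import numpy as np

def charge_sharing_dot(x, w, c_unit=1.0, c_parasitic=0.05,
                       noise_sigma=1e-3, seed=0):
    """Approximate dot(x, w) for binary inputs x and analog weights w.

    Each '1' input deposits a charge proportional to its weight onto a
    unit capacitor; sharing that charge with an (assumed) parasitic
    bitline capacitance attenuates the result, loosely modeling the
    1-5% accuracy degradation mentioned in the citation above.
    """
    rng = np.random.default_rng(seed)
    x, w = np.asarray(x, float), np.asarray(w, float)
    q = c_unit * np.sum(x * w)                  # accumulated charge
    v = q / (len(x) * (c_unit + c_parasitic))   # voltage after charge sharing
    v += rng.normal(0.0, noise_sigma)           # readout noise (assumed level)
    return v * len(x)                           # rescale to dot-product units

x = [1, 0, 1, 1]
w = [0.5, 0.25, -0.75, 1.0]
exact = sum(xi * wi for xi, wi in zip(x, w))
approx = charge_sharing_dot(x, w)
```

In this toy model the ratio `c_parasitic / c_unit` directly sets the attenuation, which is the kind of accuracy-versus-energy trade-off such analog schemes navigate.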
“…Processing-in-memory (PIM) can carry out both computation and data storage in memory with high power efficiency in computation-intensive applications [51][52][53]. In addition, compared with a 2D storage architecture, a 3D memory architecture shortens the data transmission path by vertically stacking multiple chiplets, effectively reducing energy consumption and improving thermal reliability [54].…”
Section: Memory Architecture for Processing Data
confidence: 99%
“…Many approaches to Logic-in-Memory can be found in the literature, but two main ones can be distinguished. The first can be classified as Near-Memory Computing (NMC) [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], since the memory array itself is left unmodified and logic circuits are added at its periphery; the second can be denoted Logic-in-Memory (LiM) [19][20][21][22][23][24][25][26][27][28], since logic circuits are added directly to the memory cell.…”
Section: Introduction
confidence: 99%
“…Many applications can benefit from the IMC approach, such as machine learning and deep learning algorithms [4,6,[8][9][10][11][12]14,15,19,[21][22][23][24], but also general-purpose algorithms [2,5,7,13,[16][17][18]20,25,26]. For instance: in [19], a 6T SRAM cell is extended with two transistors and a capacitor so that analog computing can be performed across the whole memory, enabling approximate arithmetic operations for machine learning algorithms; in [18], logic layers consisting of latches and LUTs are interleaved with memory layers in an SRAM array to perform different kinds of logic operations directly inside the array; in [26], the pass transistors of the 6T SRAM cell are modified to perform logic operations directly in the cell, allowing the memory to function as an SRAM, a CAM (Content-Addressable Memory), or a LiM architecture.…”
Section: Introduction
confidence: 99%
“…IMC treats data and computation as codependent entities, opening the possibility of in-situ computation within the memory array. Implementing the Multiply-and-Accumulate (MAC) operation in memory has been the key focus of several recent IMC approaches [7][8][9][10][11]. Techniques to compute the MAC in the current domain [8], the charge domain [9], and the time domain [11] were analyzed in these approaches.…”
Section: Introduction
confidence: 99%
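Whatever the physical domain (current, charge, or time), many in-memory MAC schemes reduce to the same digital skeleton: per weight bit-plane, a column-wise AND of the inputs with that bit of each weight, a population count of the results, and a shift-and-add to combine the bit-plane partial sums. The sketch below shows that skeleton under assumed bit widths; it is a functional model, not a description of any specific circuit cited above.

```python
def imc_mac(inputs, weights, w_bits=4):
    """MAC of binary inputs with unsigned w_bits-bit weights, computed
    bit-plane by bit-plane as an in-memory array effectively would."""
    acc = 0
    for b in range(w_bits):
        # One array access per bit-plane: AND each input with bit b of
        # its weight, then count the ones along the column (popcount).
        plane = sum(x & ((w >> b) & 1) for x, w in zip(inputs, weights))
        acc += plane << b   # shift-and-add combines partial sums
    return acc

inputs = [1, 0, 1, 1]
weights = [5, 3, 7, 2]      # assumed 4-bit unsigned weights
result = imc_mac(inputs, weights)
```

The result matches the exact MAC `sum(x * w)`; analog implementations trade this exactness for energy by evaluating each bit-plane sum in the current, charge, or time domain.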