2021
DOI: 10.1038/s41928-021-00676-9

A four-megabit compute-in-memory macro with eight-bit precision based on CMOS and resistive random-access memory for AI edge devices

Cited by 49 publications (14 citation statements)
References 46 publications
“…Stochastic non-idealities such as RRAM conductance relaxation and read noises degrade the signal-to-noise ratio (SNR) of the computation, leading to an inference accuracy drop. Some previous work obtained a higher SNR by limiting each RRAM cell to store a single bit, and encoding higher-precision weights using multiple cells 9,10,16 . Such an approach lowers the weight memory density.…”
Section: Article
Citation type: mentioning, confidence: 99%
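The trade-off described in this citation (single-bit cells for higher SNR versus multi-bit cells for density) can be illustrated with a short numerical sketch. This is not the circuitry of the cited macro; the conductance values, noise level and helper names (slice_weight_to_cells, read_cells_with_noise, reconstruct_weight) are illustrative assumptions only.

import numpy as np

def slice_weight_to_cells(w, n_bits=8):
    # Split an unsigned n-bit weight into n single-bit "cells" (MSB first),
    # mimicking the one-bit-per-cell encoding described in the quote above.
    return [(w >> b) & 1 for b in range(n_bits - 1, -1, -1)]

def read_cells_with_noise(bits, g_on=100e-6, g_off=1e-6, sigma=5e-6, rng=None):
    # Model each cell as an RRAM conductance (high state = 1, low state = 0)
    # perturbed by Gaussian read noise / conductance relaxation (assumed values).
    rng = rng or np.random.default_rng(0)
    g_ideal = np.where(np.asarray(bits) == 1, g_on, g_off)
    return g_ideal + rng.normal(0.0, sigma, size=g_ideal.shape)

def reconstruct_weight(g_read, g_on=100e-6, g_off=1e-6):
    # Threshold each noisy conductance back to a bit and reassemble the weight;
    # one bit per cell leaves a large noise margin, at the cost of density.
    bits = (g_read > 0.5 * (g_on + g_off)).astype(int)
    n = len(bits)
    return sum(int(b) << (n - 1 - i) for i, b in enumerate(bits))

w = 173                                  # example 8-bit weight
cells = slice_weight_to_cells(w)         # eight cells spent on one weight
g = read_cells_with_noise(cells)
print(w, reconstruct_weight(g))          # robust readback, but 8x the cell count

Storing the same 8-bit weight in a single analogue cell would instead require 256 distinguishable conductance levels, which is where relaxation and read noise erode the SNR the quoted passage refers to.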
“…Compute-in-memory (CIM) based on resistive random-access memory (RRAM) 1 promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory [2][3][4][5] . Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware [6][7][8][9][10][11][12][13][14][15][16][17] , it remains a goal for a RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design.…”
Citation type: mentioning, confidence: 99%
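For readers unfamiliar with RRAM compute-in-memory, the in-memory matrix-vector multiplication mentioned here can be sketched numerically: weights are mapped to conductances, inputs are applied as voltages, and each column current sums the products (Ohm's and Kirchhoff's laws) before an ADC digitizes the result. This is a minimal simulation under assumed parameters; g_max, noise_sigma, adc_bits and the differential two-column weight mapping are illustrative choices, not values from the cited chips.

import numpy as np

rng = np.random.default_rng(1)

def mvm_crossbar(W, x, g_max=100e-6, noise_sigma=2e-6, adc_bits=8):
    # Map each signed weight to a differential pair of conductances
    # (positive column minus negative column), scaled to the device range.
    w_abs_max = float(np.abs(W).max()) or 1.0
    G_pos = np.clip(W, 0, None) / w_abs_max * g_max
    G_neg = np.clip(-W, 0, None) / w_abs_max * g_max
    # Conductance relaxation / read noise on every device (assumed Gaussian).
    G_pos = G_pos + rng.normal(0.0, noise_sigma, G_pos.shape)
    G_neg = G_neg + rng.normal(0.0, noise_sigma, G_neg.shape)
    # Inputs act as voltages; each output current is an analogue dot product.
    i_out = x @ (G_pos - G_neg).T
    # Uniform ADC on the bit-line currents.
    i_fs = np.abs(x).sum() * g_max                    # assumed full-scale current
    levels = 2 ** adc_bits - 1
    code = np.clip(np.round((i_out / (2 * i_fs) + 0.5) * levels), 0, levels)
    i_q = (code / levels - 0.5) * 2 * i_fs
    return i_q / g_max * w_abs_max                    # back to weight units

W = rng.integers(-8, 8, size=(4, 16)).astype(float)  # toy weight matrix
x = rng.integers(0, 4, size=16).astype(float)        # toy input vector
print("digital:", W @ x)
print("in-memory (simulated):", np.round(mvm_crossbar(W, x), 2))

The energy argument in the quote follows from the multiply-accumulate happening where the weights are stored, so W never has to be moved to a separate compute unit.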
“…52 A high-energy-efficient and low latency hardware structure has been realized, incorporating memristor devices with CMOS and resistive random-access memory (RRAM) 53 structure, capable of performing computation in memory. 54 By the year 2022, a device based on memristor crossbar arrays for on-chip communication has been developed and utilized for parallel data processing. 55 It was long believed that the memristor was a theoretical element and that such a circuit element did not exist in reality.…”
Section: Historical Development of the Memristor
Citation type: mentioning, confidence: 99%
“…Memristors are attractive for neuromorphic hardware because they have low power consumption and high density. 54 They can be used to create artificial neural networks for recognizing patterns, processing images, and performing other machine learning tasks. Additionally, they can be used to create analog circuits for signal processing and control applications.…”
Section: Memristor-based Artificial Neuron and Synapse
Citation type: mentioning, confidence: 99%