2022
DOI: 10.1088/1674-4926/43/3/031401

A review on SRAM-based computing in-memory: Circuits, functions, and applications

Abstract: Artificial intelligence (AI) processes data-centric applications with minimal effort. However, it poses new challenges to system design in terms of computational speed and energy efficiency. The traditional von Neumann architecture cannot meet the requirements of heavily data-centric applications due to the separation of computation and storage. The emergence of computing in-memory (CIM) is significant in circumventing the von Neumann bottleneck. A commercialized memory architecture, static random-access memor…

Cited by 14 publications (6 citation statements) · References 76 publications
“…Figure b shows the real‐time imaging process using the AsSbO₃ device with a mercury lamp equipped with a 254 nm filter; the high contrast of the image illustrates the potential of the AsSbO₃ device for sensing. The image recognition process uses a CNN, [46] whose specific structure is shown in Figure 6c. CNNs extract image features through convolutional operations; increasing the number of convolutional layers gradually reduces the data dimensionality, lowering the computational complexity and enabling image-recognition training.…”
Section: Results (mentioning)
confidence: 99%
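The convolutional feature extraction described in the quoted passage can be sketched as a minimal valid-mode 2D convolution. The image and kernel values below are illustrative placeholders, not data from the cited work:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and sum the element-wise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 "image" and a 3x3 vertical-edge kernel: the 4x4 output is a
# reduced-dimensionality feature map, as the quoted passage describes.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
features = conv2d(image, kernel)
print(features.shape)  # (4, 4)
```

Each convolutional layer shrinks the spatial extent of its input (here 6×6 → 4×4), which is the dimensionality reduction the citing authors refer to.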
“…Differentiable indirection draws its expressive power solely from memory indirections and linear interpolation. This approach aligns well with the emerging computing paradigm of compute-in-memory [Lin et al. 2022; Wang et al. 2021], which departs from the traditional von Neumann model that MLPs are built on. We apply differentiable indirection to various tasks in the (neural) graphics pipeline, showcasing its potential as an efficient and flexible primitive for improving runtime efficiency.…”
mentioning
confidence: 74%
“…On the hardware side, recent reviews [9], [10] explore various bitcell operations, their integration with additional logic to execute Boolean and arithmetic operations, and content-addressing methods. The authors present the challenges and motivation for performing near-memory and CiM operations and explain the advantages of non-von Neumann architectures in technologies such as CMOS, ReRAM, DRAM, and others.…”
Section: Related Work (mentioning)
confidence: 99%
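As a conceptual illustration of the Boolean in-memory operations those reviews cover: in SRAM-based CIM, activating two wordlines at once lets the shared bitlines be sensed as a bitwise function of the two stored words. The sketch below emulates only the logical effect with Python integers; the function names are hypothetical and the analog sensing behavior is not modeled:

```python
# Conceptual sketch only -- emulates the logical result of activating
# two SRAM rows simultaneously, not the circuit-level behavior.

def cim_and(row_a: int, row_b: int) -> int:
    """Bitline reads 1 only where both activated cells store 1."""
    return row_a & row_b

def cim_or(row_a: int, row_b: int) -> int:
    """Complementary-bitline sensing yields a bitwise OR."""
    return row_a | row_b

a, b = 0b1100, 0b1010
print(bin(cim_and(a, b)))  # 0b1000
print(bin(cim_or(a, b)))   # 0b1110
```

Arithmetic operations in CIM designs are typically built on top of such bitwise primitives (e.g., ripple-carry addition from AND/OR/XOR results), which is the "integration with additional logic" the quoted passage mentions.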