2022 International Symposium on VLSI Technology, Systems and Applications (VLSI-TSA)
DOI: 10.1109/vlsi-tsa54299.2022.9770972

Status and challenges of in-memory computing for neural accelerators

Cited by 3 publications (2 citation statements)
References 6 publications
“…At a multiplier of 1, the LeNet-5 shows a 2.2% drop (69.4% down to 67.2%) in accuracy compared to the ideal, unperturbed setup and a 15% increase (52.2% to 67.2%) with respect to the conventionally trained DNN with weight perturbation injected at evaluation time. This result, in conjunction with recent observations on the issues with IR drop in large PCM arrays [48], highlights the value of the device-aware training technique to construct small and robust DNNs.…”
Section: PCM-Aware DNN Training and Evaluation (supporting)
confidence: 75%
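
The excerpt above contrasts device-aware (PCM-aware) training with a conventionally trained network that only sees weight perturbation at evaluation time. As a rough illustration of that evaluation-time step, here is a minimal Python/PyTorch sketch, assuming a multiplicative Gaussian noise model scaled by a `multiplier`; the noise model, `rel_sigma`, and the toy network are illustrative assumptions, not the cited authors' exact setup.

```python
# Minimal sketch (assumptions: multiplicative Gaussian noise scaled by `multiplier`,
# a toy network standing in for LeNet-5) of injecting weight perturbation into an
# already-trained model at evaluation time. Not the cited authors' exact procedure.
import copy

import torch
import torch.nn as nn


def perturb_weights(model: nn.Module, multiplier: float = 1.0,
                    rel_sigma: float = 0.05) -> nn.Module:
    """Return a copy of `model` whose weights carry multiplicative Gaussian noise.

    `rel_sigma` is an assumed relative standard deviation of device variation;
    `multiplier` scales it, mirroring the sweep mentioned in the excerpt.
    """
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            noise = torch.randn_like(p) * rel_sigma * multiplier
            p.mul_(1.0 + noise)  # weight -> weight * (1 + noise)
    return noisy


# Usage: evaluate a conventionally trained model under evaluation-time perturbation.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
noisy_model = perturb_weights(model, multiplier=1.0)
noisy_model.eval()  # run the usual test loop on `noisy_model` to measure the accuracy drop
```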
“…By storing data in RAM, IMC systems can achieve microsecond-level access times, significantly enhancing the performance of data-intensive applications. The architecture of an IMC system is designed to maximize the advantages of rapid data access (Ielmini, D., Lepri, N., Mannocci, P., & Glukhov, A., 2022). Data is partitioned and distributed across the memory of multiple nodes in a clustered or grid environment, enabling parallel processing and fault tolerance.…”
Section: Theoretical Underpinnings of In-Memory Computing (mentioning)
confidence: 99%
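
The excerpt above describes the data-grid flavour of in-memory computing: records held in RAM, partitioned across the nodes of a cluster, and replicated for fault tolerance. A minimal Python sketch of that idea follows, assuming simple hash partitioning with one neighbour replica per record; the node count, partitioning scheme, and function names are illustrative, not taken from the cited work.

```python
# Minimal sketch (assumptions: 4 nodes, hash partitioning, one neighbour replica)
# of the data-grid style of in-memory computing described in the excerpt:
# records live in node-local RAM, are partitioned for parallel access, and are
# replicated so a single node failure does not lose data.
from collections import defaultdict

NUM_NODES = 4


def partition(key: str) -> int:
    """Assign a record to its primary node by hashing the key."""
    return hash(key) % NUM_NODES


def build_grid(records: dict) -> dict:
    """Distribute records across node-local in-memory stores, plus one replica each."""
    nodes = defaultdict(dict)
    for key, value in records.items():
        primary = partition(key)
        replica = (primary + 1) % NUM_NODES  # simple neighbour replication
        nodes[primary][key] = value
        nodes[replica][key] = value
    return nodes


def lookup(nodes: dict, key: str):
    """Serve a read from the primary node; fall back to the replica if it is missing."""
    primary = partition(key)
    value = nodes.get(primary, {}).get(key)
    if value is None:
        value = nodes.get((primary + 1) % NUM_NODES, {}).get(key)
    return value


grid = build_grid({f"row{i}": i * i for i in range(10)})
print(lookup(grid, "row7"))  # reads come straight from memory, no disk involved
```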