2020 IEEE International Reliability Physics Symposium (IRPS)
DOI: 10.1109/irps45951.2020.9128340

Introduction of Non-Volatile Computing In Memory (nvCIM) by 3D NAND Flash for Inference Accelerator of Deep Neural Network (DNN) and the Read Disturb Reliability Evaluation: (Invited Paper)

Cited by 5 publications (3 citation statements)
References 3 publications
“…Read disturb as well as program disturb can change the conductance of a synaptic device, reducing its accuracy. When implementing a synapse array with a NAND-type array, a pass voltage must be applied to de-selected cells of the same string during the inference operation, causing a read disturb (Figure 10a) [39,40,41,42,43,44,45,46,47,48,49]. However, in the proposed structure, there is little risk of a read disturb because there is no need to apply pass voltage to the word lines of de-selected cells (Figure 10b).…”
Section: Results
confidence: 99%
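
The disturb mechanism described in this statement lends itself to a simple back-of-the-envelope model. The following is a minimal Python sketch, not taken from the cited paper: it assumes that every inference read applies one pass-voltage event to the de-selected cells of a NAND string and that each event shifts their stored threshold voltage by a small constant; the string length and the per-event drift are illustrative values only.

```python
import numpy as np

# Illustrative read-disturb model (assumed numbers, not measured data):
# every inference read biases the de-selected cells of the string with V_pass,
# and each such event shifts the stored threshold voltage (the synaptic weight)
# by a small constant amount.
CELLS_PER_STRING = 64      # assumed number of cells sharing one NAND string
DVT_PER_DISTURB = 1e-6     # assumed Vt drift (V) per pass-voltage event

def apply_read_disturb(vt, selected_wl, n_reads):
    """Return the threshold voltages of one string after n_reads inference reads."""
    vt = vt.copy()
    disturbed = np.ones(vt.shape, dtype=bool)
    disturbed[selected_wl] = False   # the selected cell sees V_read, not V_pass
    vt[disturbed] -= DVT_PER_DISTURB * n_reads
    return vt

rng = np.random.default_rng(0)
vt0 = rng.uniform(1.0, 4.0, CELLS_PER_STRING)        # weights stored as Vt levels
vt1 = apply_read_disturb(vt0, selected_wl=10, n_reads=100_000)
print(f"worst-case Vt drift after 1e5 reads: {abs(vt1 - vt0).max():.3f} V")
```

In the structure proposed by the citing work, no pass voltage is needed on de-selected word lines, so the `disturbed` mask above would be all-false and the drift term disappears.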
“…The second step is to compute inside […]. The next and last step is to go to the top of the memory hierarchy by computing directly in the SCM, where massive data storage and large vectors are available. 3D NAND Flash memory computing has been developed by [7] to improve the power consumption of deep neural networks. Indeed, the Flash memory is used to store the constant weights of the network.…”
Section: Related Work
confidence: 99%
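
As a rough illustration of the weight-stationary idea attributed to [7] (constant DNN weights held in the Flash array, multiply-accumulate performed by current summation along the bit lines), here is a minimal numpy sketch. The 16 conductance levels and the 128x32 layer size are assumptions chosen for illustration, not parameters of the cited design.

```python
import numpy as np

# Sketch of Flash-based compute-in-memory inference: the layer's constant weights
# are "programmed" as a few discrete conductance levels, and each bit line sums
# the per-cell currents G_ij * V_i, which is a matrix-vector product.
# The 16-level quantization and the 128x32 layer are illustrative assumptions.

def program_conductances(weights, levels=16):
    """Quantize weights to a limited number of conductance levels per cell."""
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((weights - w_min) / step) * step + w_min

def cim_matvec(conductances, activations):
    """Bit-line current summation: output_j = sum_i G_ij * V_i."""
    return conductances.T @ activations

rng = np.random.default_rng(1)
w = rng.standard_normal((128, 32))   # constant layer weights stored in Flash
g = program_conductances(w)          # programmed cell conductances
x = rng.random(128)                  # input activations driving the word lines
err = np.linalg.norm(cim_matvec(g, x) - w.T @ x) / np.linalg.norm(w.T @ x)
print(f"relative error introduced by conductance quantization: {err:.4f}")
```

Because the weights never leave the array, only the activations and the accumulated outputs move, which is where the power saving for inference comes from.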
“…Indeed, such solutions bring computation directly into the memory circuit, avoiding a large part of the data exchange with the CPU. Numerous IMC/NMC solutions have been proposed so far on the different levels of the memory hierarchy, from cache memory with Computing SRAM (C-SRAM) solutions [3], [4], primary memory with Computing DRAM (C-DRAM) [5], [6] and finally to storage class memory (SCM) [7]-[9] with Computing SCM (C-SCM) based on different technologies (Flash, PCM, MRAM, RRAM). All these solutions take advantage of the large memory array organisation to compute operations on very large vectors, close to the Single Instruction Multiple Data (SIMD) computation concept.…”
Section: Introduction
confidence: 99%
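
To make the SIMD analogy in this passage concrete, the short sketch below applies one logical operation across an entire memory-row-sized vector in a single step; the 4096-bit row width is an assumed figure, and numpy merely stands in for the row-parallel logic that C-SRAM/C-DRAM/C-SCM designs implement inside the array itself.

```python
import numpy as np

# Illustrative only: one "instruction" operating on a whole memory row at once,
# mimicking the SIMD-like, very-wide-vector computation that in-memory computing
# performs without shuttling operands to the CPU. ROW_BITS is an assumption.
ROW_BITS = 4096

rng = np.random.default_rng(2)
row_a = rng.integers(0, 2, ROW_BITS, dtype=np.uint8)
row_b = rng.integers(0, 2, ROW_BITS, dtype=np.uint8)

# A single row-parallel bitwise AND, followed by a popcount of the result row.
result_row = row_a & row_b
print("bits set in AND of two 4096-bit rows:", int(result_row.sum()))
```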