2019
DOI: 10.1109/tvlsi.2018.2882194
Three-Dimensional NAND Flash for Vector–Matrix Multiplication

Cited by 85 publications (43 citation statements)
References 12 publications
“…Read disturb as well as program disturb can change the conductance of a synaptic device, reducing its accuracy. When implementing a synapse array with a NAND-type array, a pass voltage must be applied to de-selected cells of the same string during the inference operation, causing a read disturb ( Figure 10 a) [ 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 ]. However, in the proposed structure, there is little risk of a read disturb because there is no need to apply pass voltage to the word lines of de-selected cells ( Figure 10 b).…”
Section: Results
confidence: 99%
“…Meanwhile, most prior work has studied synaptic devices based on PCM, RRAM [6], and NOR flash memory [7], [8]. Another group used the measured characteristics of a single device to implement vector–matrix multiplication in the NAND flash memory architecture [12]. However, not all cells in a NAND flash memory can be used as synaptic devices, because the size of every synapse layer is fixed by the number of word lines and the number of bit lines [12].…”
Section: Introduction
confidence: 99%
“…Another group used the measured characteristics of a single device to implement vector–matrix multiplication in the NAND flash memory architecture [12]. However, not all cells in a NAND flash memory can be used as synaptic devices, because the size of every synapse layer is fixed by the number of word lines and the number of bit lines [12]. In addition, in the scheme that applies input voltages to the word lines [12], analogue input values are difficult to support because of the nonlinearity of the I_BL–V_WL characteristics.…”
Section: Introduction
confidence: 99%
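The vector–matrix multiplication these citing works refer to maps weights to cell conductances and inputs to read voltages, so each bit-line current is an analog dot product, I_j = Σ_i G[i, j]·V[i]. A minimal NumPy sketch of that computation (all names and values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical sketch: weights are stored as cell conductances G
# (an m x n array, in siemens); inputs are encoded as word-line read
# voltages V (in volts). By Kirchhoff's current law, each bit line
# sums I_j = sum_i G[i, j] * V[i]: one analog dot product per column.
def vmm_currents(G, V):
    return V @ G  # shape (n,): one summed current per bit line

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))  # cell conductances in S
V = np.array([0.1, 0.2, 0.0, 0.3])        # read voltages in V
I = vmm_currents(G, V)
```

The nonlinearity objection quoted above amounts to saying that in a real array the contribution of each cell is not exactly G·V, so the clean linear sum only approximates the hardware behavior.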
“…Among the possible candidates for the aforementioned computing architecture, besides resistive random access memory (RRAM), phase-change random access memory (PCRAM), and MRAM, nanoscale flash memory has shown great promise for the hardware implementation of deep learning owing to its commercialized technology, ultrahigh integration density, and high-speed transmission. Recent studies show that a nanoscale flash memory array can improve the computing efficiency of vector-by-matrix multiplication, and a fully connected neural network has been demonstrated. However, hardware realization of fully connected layers alone is far from sufficient for the multilayer neural networks of deep learning, since over 90% of the computation takes the form of convolution.…”
Section: Introduction
confidence: 99%
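The remark that convolution dominates deep-learning computation connects back to VMM hardware because a convolution can itself be lowered to one matrix multiplication by unrolling input patches ("im2col"). A minimal sketch, assuming stride 1 and no padding (the function name is hypothetical):

```python
import numpy as np

# Sketch: lower a 2-D convolution (valid mode, stride 1) to a single
# matrix multiplication. Each kh x kw input patch becomes one row of a
# patch matrix; multiplying by the flattened kernel yields all outputs.
def conv2d_via_matmul(x, k):
    H, W = x.shape
    kh, kw = k.shape
    oh, ow = H - kh + 1, W - kw + 1
    patches = np.array([x[i:i + kh, j:j + kw].ravel()
                        for i in range(oh) for j in range(ow)])
    return (patches @ k.ravel()).reshape(oh, ow)

x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 input
k = np.ones((2, 2))                           # toy 2x2 kernel
out = conv2d_via_matmul(x, k)
```

This lowering is why an array that accelerates vector–matrix multiplication can, in principle, also serve convolutional layers.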