2021
DOI: 10.1109/ted.2021.3089450
3-D AND-Type Flash Memory Architecture With High-κ Gate Dielectric for High-Density Synaptic Devices

Cited by 18 publications (10 citation statements)
References 21 publications
“…The flash cells are configured in an AND-type array architecture, in which source lines and drain lines are parallel. The AND-type array has the advantages of low-power selective write operations, scalability, and large-scale parallel computing (41)(42)(43). By contrast, selective write operations in a NOR-type array consume a great deal of energy, and large-scale parallel operations are difficult in a NAND-type array because of its cell-string structure.…”

Section: Device Characterization
Citation type: mentioning (confidence: 99%)
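The "large-scale parallel computing" claimed for the AND-type array refers to in-array vector-matrix multiplication: every cell stores a weight as a conductance, and applying read voltages to all rows sums the cell currents on each column simultaneously via Kirchhoff's current law. A minimal numerical sketch, with illustrative (assumed) conductance and voltage values:

```python
import numpy as np

# Sketch of in-array vector-matrix multiplication (VMM): each synaptic cell
# stores a weight as a conductance G[i, j]; applying input voltages V[i] to
# the rows produces column currents I[j] = sum_i V[i] * G[i, j], all columns
# evaluated in one parallel step. Values below are illustrative only.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))   # cell conductances in siemens (assumed range)
V = np.array([0.1, 0.2, 0.0, 0.1])          # row read voltages in volts (assumed)

I = V @ G                                   # all column currents at once
print(I.shape)                              # one summed current per column
```

The single `V @ G` product stands in for what the array does physically in one read cycle, which is why selective, low-power access to rows and columns matters for this architecture.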
“…Unlike software-based weight initialization, synaptic devices can hold only a limited number of conductance values (weights). More recently, synaptic devices have been realized with a single memory device, including charge-trap flash (CTF) memory [16], resistive-switching random-access memory (RRAM) [6], [17]–[20], phase-change random-access memory (PRAM) [21], ferroelectric random-access memory (FRAM) [22], and magnetic random-access memory (MRAM) [23], to increase synapse-array density and minimize power consumption, departing from conventional synapses made up of circuits or several electron devices. Also, many of today's state-of-the-art deep-learning models, such as Inception v1, VGG-19, and ResNet, contain a huge number of neurons with millions of parameters [24]–[27].…”

Section: Motivation
Citation type: mentioning (confidence: 99%)
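The limitation noted above, that a device offers only a finite set of conductance states, means software weights must be quantized onto that grid before mapping to hardware. A hedged sketch of such a mapping; the function name, level count, and weight range are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Illustrative weight quantization for a synaptic device that supports only
# n_levels evenly spaced conductance states (parameters are assumptions).
def quantize_weights(w, n_levels=16, w_min=-1.0, w_max=1.0):
    """Snap continuous weights onto n_levels evenly spaced states in [w_min, w_max]."""
    step = (w_max - w_min) / (n_levels - 1)
    return np.clip(np.round((w - w_min) / step) * step + w_min, w_min, w_max)

w = np.array([-0.95, -0.2, 0.03, 0.61])     # continuous software weights
q = quantize_weights(w, n_levels=16)        # nearest representable device states
print(q)
```

Fewer levels mean coarser quantization error, which is exactly why limited conductance resolution degrades accuracy relative to software-trained weights.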
“…The activation matrix is given by the matrix K such that K ∈ R^(M×n). The rows of K can be considered as M data points in an n-dimensional space, since K is an M × n matrix. From a hardware perspective, realizing such an SLFN would require synaptic devices such as resistive-switching random-access memory [17], charge-trap flash memory [16], field-effect transistors [39], or phase-change memory [21] for storing the weight values, cooperating with CMOS neuron circuits [40], [41]. Essentially, the synaptic devices use their electrical conductance states as the equivalent weight values, and the neuron circuits operate according to the behavior of the activation function.…”

Section: Mathematical Preliminaries
Citation type: mentioning (confidence: 99%)
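The division of labor described above, devices store weights as conductances while neuron circuits apply the activation, can be mirrored in a short numerical sketch of a single-hidden-layer feedforward network (SLFN). All dimensions and the tanh activation are illustrative assumptions:

```python
import numpy as np

# Sketch of an SLFN hidden layer: W plays the role of the stored conductance
# states, and tanh stands in for the neuron circuit's activation function.
# Dimensions and activation choice are assumptions for illustration.
rng = np.random.default_rng(1)
M, n, d = 5, 4, 3                      # M samples, n hidden nodes, d input features
X = rng.normal(size=(M, d))            # input data, one sample per row
W = rng.normal(size=(d, n))            # hidden-layer weights ~ device conductances
K = np.tanh(X @ W)                     # activation matrix K in R^(M x n)
print(K.shape)
```

Each row of `K` is one data point mapped into the n-dimensional hidden space, matching the description of K's rows as M points in R^n.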
“…However, there is a drawback that errors occur in the current summation due to problems such as sneak current, which requires additional active selecting components and results in one-selector-one-resistor (1S-1R) or one-transistor-one-resistor (1T-1R) array structures [30]–[34]. In contrast, transistor-type synaptic devices are free from these issues thanks to the gate electrode [35]–[38], and these devices are typically integrated in NAND or NOR array structures. The NOR-type structure offers the advantage of parallel matrix operations, similar to artificial neural networks (ANNs), but requires individual drain contacts for each cell, resulting in an area inefficiency of over 10F².…”

Section: Introduction
Citation type: mentioning (confidence: 99%)
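The sneak-current error motivating 1S-1R and 1T-1R structures can be illustrated with a deliberately simplified model: in a passive crossbar, reading one cell also lets current flow through a series path of three unselected neighbors, inflating the sensed current, while an access transistor cuts that path off. This is an idealized sketch with assumed resistance values, not a full circuit solve:

```python
# Simplified sneak-path illustration (assumed values, idealized model):
# reading cell (0, 0) in a passive crossbar, three low-resistance unselected
# neighbors form a series sneak path in parallel with the selected cell.
V_read = 0.2                       # read voltage in volts (assumed)
R_sel = 1e5                        # selected cell resistance in ohms (assumed)
R_neighbors = (1e4, 1e4, 1e4)      # three cells on the sneak path (assumed)

I_ideal = V_read / R_sel                   # current we want to sense
R_sneak = sum(R_neighbors)                 # series resistance of the sneak path
I_passive = I_ideal + V_read / R_sneak     # sneak current adds to the reading
I_1t1r = I_ideal                           # access transistor blocks the path

print(I_passive > I_ideal)                 # passive read overestimates the cell current
```

Even this toy model shows why the error compounds in larger arrays: every additional low-resistance neighbor adds another parallel path, whereas the gate electrode of a transistor-type cell isolates unselected devices by construction.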