2019
DOI: 10.1088/1361-6528/ab34da

Unsupervised online learning of temporal information in spiking neural network using thin-film transistor-type NOR flash memory devices

Abstract: Brain-inspired analog neuromorphic systems based on synaptic arrays have attracted considerable attention due to their low-power computing. The spike-timing-dependent plasticity (STDP) algorithm is considered one of the appropriate neuro-inspired techniques for on-chip learning. The aim of this study is to investigate the methodology of unsupervised STDP-based learning in temporal encoding systems. The system-level simulation was performed based on the measurement results of thin-film transistor-type asymm…

Cited by 11 publications (9 citation statements)
References 30 publications
“…T represents the total number of time steps. The firing time of input neurons is inversely proportional to the input value (I_j) of each neuron [29]-[33]. The cumulative input function of the j-th neuron is defined in terms of t_j^l, the firing time of the j-th neuron in the l-th layer.…”
Section: A. Training Algorithm (mentioning)
confidence: 99%
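The statement above describes latency (time-to-first-spike) encoding, in which a larger input fires earlier and downstream neurons integrate a cumulative input over the simulation window. The following is a minimal Python sketch of that idea under stated assumptions: the 1/(I + eps) mapping, the scale parameter, and the function names latency_encode and cumulative_input are illustrative choices, not the cited paper's implementation.

```python
import numpy as np

def latency_encode(inputs, T=100, scale=1.0, eps=1e-6):
    """Time-to-first-spike encoding: firing time is inversely
    proportional to the input value, clipped to the window [0, T-1].
    (The exact mapping and 'scale' are assumptions for illustration.)"""
    inputs = np.asarray(inputs, dtype=float)
    t_fire = np.clip(scale / (inputs + eps), 0, T - 1)
    return np.round(t_fire).astype(int)

def cumulative_input(weights, t_fire, T=100):
    """Cumulative input to each post-synaptic neuron over time.
    At step t, every pre-synaptic neuron that has already fired
    contributes its weight; a threshold-crossing rule (not shown)
    would decide the post-neuron's own firing time."""
    n_post = weights.shape[0]
    cum = np.zeros((T, n_post))
    for t in range(T):
        active = (t_fire <= t).astype(float)  # pre-neurons fired by time t
        cum[t] = weights @ active
    return cum

# toy usage: 4 input neurons, 2 output neurons
x = np.array([0.9, 0.1, 0.5, 0.0])
w = np.random.rand(2, 4)
t_in = latency_encode(x, T=50)
c = cumulative_input(w, t_in, T=50)
```

Note that a strong input (0.9) is encoded as an early firing time and a zero input is pushed to the end of the window, matching the inverse-proportionality described in the quoted statement.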
“…Note that our network does not take into account the pulse-to-pulse variation considered in many previous studies [33,50], since the weights obtained through off-chip learning are transferred once to the synaptic devices in the array. The recognition accuracy with the variation of device characteristics is compared with that of conventional rate-encoded networks of the same size.…”
Section: W N I N (mentioning)
confidence: 99%
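As a rough illustration of the weight-transfer scenario in the statement above, the sketch below writes off-chip-trained weights once onto a synaptic array and models only device-to-device variation, omitting pulse-to-pulse (write) noise because each weight is programmed a single time. The multiplicative Gaussian noise model and the sigma_d2d value are assumptions made for this sketch, not parameters taken from the cited work.

```python
import numpy as np

def transfer_weights(w_trained, sigma_d2d=0.05, rng=None):
    """One-shot transfer of off-chip-trained weights to a synaptic array.
    Device-to-device variation is modeled as multiplicative Gaussian
    noise on each programmed conductance (an illustrative assumption);
    pulse-to-pulse variation is omitted since weights are written once."""
    if rng is None:
        rng = np.random.default_rng()
    variation = rng.normal(1.0, sigma_d2d, size=w_trained.shape)
    return w_trained * variation

# toy usage: perturb hypothetical trained weights and inspect the spread
w = np.random.randn(10, 784) * 0.1          # hypothetical trained weights
w_on_chip = transfer_weights(w, sigma_d2d=0.1)
print(np.mean(np.abs(w_on_chip - w)))        # average programming error
```

Sweeping sigma_d2d and re-evaluating recognition accuracy would reproduce the kind of variation-tolerance comparison the quoted statement describes.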
“…However, utilizing an SRAM cell as a synaptic device has limitations, such as the requirement of a large number of memory cells for multi-bit weight values and inefficient power consumption [14]. As an alternative hardware platform, neuromorphic systems with analog-conductance synaptic devices, such as resistive random access memory [15-17], phase change random access memory [18,19], and charge-trap-based floating-gate metal-oxide-semiconductor field-effect transistors (FG-MOSFETs) [20-23], have been widely researched to implement NNs with higher density, faster parallel analog computing, and lower power consumption [14,24,25]. There are two methods to train neuromorphic systems: off-chip and on-chip.…”
Section: Introduction (mentioning)
confidence: 99%
“…The neuromorphic system is a potential candidate for the beyond-von-Neumann computing era to solve this issue by mimicking the massively parallel processing of biological nervous systems, and it has recently gained interest by demonstrating cognitive functions including pattern recognition [3-13]. Since synaptic devices play a key role not only in storing information but also in constructing the neural network and transferring signals, various kinds of artificial synaptic devices have been investigated and demonstrated, including memristors and transistors [14-21]. Typically, synaptic devices are required to have analog switching characteristics considering the floating-point weight values of artificial neural networks.…”
Section: Introduction (mentioning)
confidence: 99%